Jan 27 21:42:50 localhost kernel: Linux version 5.14.0-661.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026
Jan 27 21:42:50 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 27 21:42:50 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 27 21:42:50 localhost kernel: BIOS-provided physical RAM map:
Jan 27 21:42:50 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 27 21:42:50 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 27 21:42:50 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 27 21:42:50 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 27 21:42:50 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 27 21:42:50 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 27 21:42:50 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 27 21:42:50 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 27 21:42:50 localhost kernel: NX (Execute Disable) protection: active
Jan 27 21:42:50 localhost kernel: APIC: Static calls initialized
Jan 27 21:42:50 localhost kernel: SMBIOS 2.8 present.
Jan 27 21:42:50 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 27 21:42:50 localhost kernel: Hypervisor detected: KVM
Jan 27 21:42:50 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 27 21:42:50 localhost kernel: kvm-clock: using sched offset of 4246100487 cycles
Jan 27 21:42:50 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 27 21:42:50 localhost kernel: tsc: Detected 2799.998 MHz processor
Jan 27 21:42:50 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 27 21:42:50 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 27 21:42:50 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 27 21:42:50 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 27 21:42:50 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 27 21:42:50 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 27 21:42:50 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 27 21:42:50 localhost kernel: Using GB pages for direct mapping
Jan 27 21:42:50 localhost kernel: RAMDISK: [mem 0x2d426000-0x32a0afff]
Jan 27 21:42:50 localhost kernel: ACPI: Early table checksum verification disabled
Jan 27 21:42:50 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 27 21:42:50 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 27 21:42:50 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 27 21:42:50 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 27 21:42:50 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 27 21:42:50 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 27 21:42:50 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 27 21:42:50 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 27 21:42:50 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 27 21:42:50 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 27 21:42:50 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 27 21:42:50 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 27 21:42:50 localhost kernel: No NUMA configuration found
Jan 27 21:42:50 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 27 21:42:50 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Jan 27 21:42:50 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 27 21:42:50 localhost kernel: Zone ranges:
Jan 27 21:42:50 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 27 21:42:50 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 27 21:42:50 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 27 21:42:50 localhost kernel:   Device   empty
Jan 27 21:42:50 localhost kernel: Movable zone start for each node
Jan 27 21:42:50 localhost kernel: Early memory node ranges
Jan 27 21:42:50 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 27 21:42:50 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 27 21:42:50 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 27 21:42:50 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 27 21:42:50 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 27 21:42:50 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 27 21:42:50 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 27 21:42:50 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Jan 27 21:42:50 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 27 21:42:50 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 27 21:42:50 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 27 21:42:50 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 27 21:42:50 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 27 21:42:50 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 27 21:42:50 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 27 21:42:50 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 27 21:42:50 localhost kernel: TSC deadline timer available
Jan 27 21:42:50 localhost kernel: CPU topo: Max. logical packages:   8
Jan 27 21:42:50 localhost kernel: CPU topo: Max. logical dies:       8
Jan 27 21:42:50 localhost kernel: CPU topo: Max. dies per package:   1
Jan 27 21:42:50 localhost kernel: CPU topo: Max. threads per core:   1
Jan 27 21:42:50 localhost kernel: CPU topo: Num. cores per package:     1
Jan 27 21:42:50 localhost kernel: CPU topo: Num. threads per package:   1
Jan 27 21:42:50 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 27 21:42:50 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 27 21:42:50 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 27 21:42:50 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 27 21:42:50 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 27 21:42:50 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 27 21:42:50 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 27 21:42:50 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 27 21:42:50 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 27 21:42:50 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 27 21:42:50 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 27 21:42:50 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 27 21:42:50 localhost kernel: Booting paravirtualized kernel on KVM
Jan 27 21:42:50 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 27 21:42:50 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 27 21:42:50 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 27 21:42:50 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Jan 27 21:42:50 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Jan 27 21:42:50 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 27 21:42:50 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 27 21:42:50 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", will be passed to user space.
Jan 27 21:42:50 localhost kernel: random: crng init done
Jan 27 21:42:50 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 27 21:42:50 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 27 21:42:50 localhost kernel: Fallback order for Node 0: 0 
Jan 27 21:42:50 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 27 21:42:50 localhost kernel: Policy zone: Normal
Jan 27 21:42:50 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 27 21:42:50 localhost kernel: software IO TLB: area num 8.
Jan 27 21:42:50 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 27 21:42:50 localhost kernel: ftrace: allocating 49417 entries in 194 pages
Jan 27 21:42:50 localhost kernel: ftrace: allocated 194 pages with 3 groups
Jan 27 21:42:50 localhost kernel: Dynamic Preempt: voluntary
Jan 27 21:42:50 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 27 21:42:50 localhost kernel: rcu:         RCU event tracing is enabled.
Jan 27 21:42:50 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 27 21:42:50 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Jan 27 21:42:50 localhost kernel:         Rude variant of Tasks RCU enabled.
Jan 27 21:42:50 localhost kernel:         Tracing variant of Tasks RCU enabled.
Jan 27 21:42:50 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 27 21:42:50 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 27 21:42:50 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 27 21:42:50 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 27 21:42:50 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 27 21:42:50 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 27 21:42:50 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 27 21:42:50 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 27 21:42:50 localhost kernel: Console: colour VGA+ 80x25
Jan 27 21:42:50 localhost kernel: printk: console [ttyS0] enabled
Jan 27 21:42:50 localhost kernel: ACPI: Core revision 20230331
Jan 27 21:42:50 localhost kernel: APIC: Switch to symmetric I/O mode setup
Jan 27 21:42:50 localhost kernel: x2apic enabled
Jan 27 21:42:50 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Jan 27 21:42:50 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 27 21:42:50 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 27 21:42:50 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 27 21:42:50 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 27 21:42:50 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 27 21:42:50 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 27 21:42:50 localhost kernel: Spectre V2 : Mitigation: Retpolines
Jan 27 21:42:50 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 27 21:42:50 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 27 21:42:50 localhost kernel: RETBleed: Mitigation: untrained return thunk
Jan 27 21:42:50 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 27 21:42:50 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 27 21:42:50 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 27 21:42:50 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 27 21:42:50 localhost kernel: x86/bugs: return thunk changed
Jan 27 21:42:50 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 27 21:42:50 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 27 21:42:50 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 27 21:42:50 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 27 21:42:50 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 27 21:42:50 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 27 21:42:50 localhost kernel: Freeing SMP alternatives memory: 40K
Jan 27 21:42:50 localhost kernel: pid_max: default: 32768 minimum: 301
Jan 27 21:42:50 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 27 21:42:50 localhost kernel: landlock: Up and running.
Jan 27 21:42:50 localhost kernel: Yama: becoming mindful.
Jan 27 21:42:50 localhost kernel: SELinux:  Initializing.
Jan 27 21:42:50 localhost kernel: LSM support for eBPF active
Jan 27 21:42:50 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 27 21:42:50 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 27 21:42:50 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 27 21:42:50 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 27 21:42:50 localhost kernel: ... version:                0
Jan 27 21:42:50 localhost kernel: ... bit width:              48
Jan 27 21:42:50 localhost kernel: ... generic registers:      6
Jan 27 21:42:50 localhost kernel: ... value mask:             0000ffffffffffff
Jan 27 21:42:50 localhost kernel: ... max period:             00007fffffffffff
Jan 27 21:42:50 localhost kernel: ... fixed-purpose events:   0
Jan 27 21:42:50 localhost kernel: ... event mask:             000000000000003f
Jan 27 21:42:50 localhost kernel: signal: max sigframe size: 1776
Jan 27 21:42:50 localhost kernel: rcu: Hierarchical SRCU implementation.
Jan 27 21:42:50 localhost kernel: rcu:         Max phase no-delay instances is 400.
Jan 27 21:42:50 localhost kernel: smp: Bringing up secondary CPUs ...
Jan 27 21:42:50 localhost kernel: smpboot: x86: Booting SMP configuration:
Jan 27 21:42:50 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 27 21:42:50 localhost kernel: smp: Brought up 1 node, 8 CPUs
Jan 27 21:42:50 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Jan 27 21:42:50 localhost kernel: node 0 deferred pages initialised in 11ms
Jan 27 21:42:50 localhost kernel: Memory: 7763864K/8388068K available (16384K kernel code, 5797K rwdata, 13916K rodata, 4200K init, 7192K bss, 618364K reserved, 0K cma-reserved)
Jan 27 21:42:50 localhost kernel: devtmpfs: initialized
Jan 27 21:42:50 localhost kernel: x86/mm: Memory block size: 128MB
Jan 27 21:42:50 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 27 21:42:50 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 27 21:42:50 localhost kernel: pinctrl core: initialized pinctrl subsystem
Jan 27 21:42:50 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 27 21:42:50 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 27 21:42:50 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 27 21:42:50 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 27 21:42:50 localhost kernel: audit: initializing netlink subsys (disabled)
Jan 27 21:42:50 localhost kernel: audit: type=2000 audit(1769550168.017:1): state=initialized audit_enabled=0 res=1
Jan 27 21:42:50 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 27 21:42:50 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 27 21:42:50 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 27 21:42:50 localhost kernel: cpuidle: using governor menu
Jan 27 21:42:50 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 27 21:42:50 localhost kernel: PCI: Using configuration type 1 for base access
Jan 27 21:42:50 localhost kernel: PCI: Using configuration type 1 for extended access
Jan 27 21:42:50 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 27 21:42:50 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 27 21:42:50 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 27 21:42:50 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 27 21:42:50 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 27 21:42:50 localhost kernel: Demotion targets for Node 0: null
Jan 27 21:42:50 localhost kernel: cryptd: max_cpu_qlen set to 1000
Jan 27 21:42:50 localhost kernel: ACPI: Added _OSI(Module Device)
Jan 27 21:42:50 localhost kernel: ACPI: Added _OSI(Processor Device)
Jan 27 21:42:50 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 27 21:42:50 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 27 21:42:50 localhost kernel: ACPI: Interpreter enabled
Jan 27 21:42:50 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 27 21:42:50 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Jan 27 21:42:50 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 27 21:42:50 localhost kernel: PCI: Using E820 reservations for host bridge windows
Jan 27 21:42:50 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 27 21:42:50 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 27 21:42:50 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 27 21:42:50 localhost kernel: acpiphp: Slot [3] registered
Jan 27 21:42:50 localhost kernel: acpiphp: Slot [4] registered
Jan 27 21:42:50 localhost kernel: acpiphp: Slot [5] registered
Jan 27 21:42:50 localhost kernel: acpiphp: Slot [6] registered
Jan 27 21:42:50 localhost kernel: acpiphp: Slot [7] registered
Jan 27 21:42:50 localhost kernel: acpiphp: Slot [8] registered
Jan 27 21:42:50 localhost kernel: acpiphp: Slot [9] registered
Jan 27 21:42:50 localhost kernel: acpiphp: Slot [10] registered
Jan 27 21:42:50 localhost kernel: acpiphp: Slot [11] registered
Jan 27 21:42:50 localhost kernel: acpiphp: Slot [12] registered
Jan 27 21:42:50 localhost kernel: acpiphp: Slot [13] registered
Jan 27 21:42:50 localhost kernel: acpiphp: Slot [14] registered
Jan 27 21:42:50 localhost kernel: acpiphp: Slot [15] registered
Jan 27 21:42:50 localhost kernel: acpiphp: Slot [16] registered
Jan 27 21:42:50 localhost kernel: acpiphp: Slot [17] registered
Jan 27 21:42:50 localhost kernel: acpiphp: Slot [18] registered
Jan 27 21:42:50 localhost kernel: acpiphp: Slot [19] registered
Jan 27 21:42:50 localhost kernel: acpiphp: Slot [20] registered
Jan 27 21:42:50 localhost kernel: acpiphp: Slot [21] registered
Jan 27 21:42:50 localhost kernel: acpiphp: Slot [22] registered
Jan 27 21:42:50 localhost kernel: acpiphp: Slot [23] registered
Jan 27 21:42:50 localhost kernel: acpiphp: Slot [24] registered
Jan 27 21:42:50 localhost kernel: acpiphp: Slot [25] registered
Jan 27 21:42:50 localhost kernel: acpiphp: Slot [26] registered
Jan 27 21:42:50 localhost kernel: acpiphp: Slot [27] registered
Jan 27 21:42:50 localhost kernel: acpiphp: Slot [28] registered
Jan 27 21:42:50 localhost kernel: acpiphp: Slot [29] registered
Jan 27 21:42:50 localhost kernel: acpiphp: Slot [30] registered
Jan 27 21:42:50 localhost kernel: acpiphp: Slot [31] registered
Jan 27 21:42:50 localhost kernel: PCI host bridge to bus 0000:00
Jan 27 21:42:50 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 27 21:42:50 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 27 21:42:50 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 27 21:42:50 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 27 21:42:50 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 27 21:42:50 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 27 21:42:50 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 27 21:42:50 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 27 21:42:50 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 27 21:42:50 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 27 21:42:50 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 27 21:42:50 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 27 21:42:50 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 27 21:42:50 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 27 21:42:50 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 27 21:42:50 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 27 21:42:50 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 27 21:42:50 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 27 21:42:50 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 27 21:42:50 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 27 21:42:50 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 27 21:42:50 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 27 21:42:50 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 27 21:42:50 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 27 21:42:50 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 27 21:42:50 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 27 21:42:50 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 27 21:42:50 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 27 21:42:50 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 27 21:42:50 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 27 21:42:50 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 27 21:42:50 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 27 21:42:50 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 27 21:42:50 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 27 21:42:50 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 27 21:42:50 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 27 21:42:50 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 27 21:42:50 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 27 21:42:50 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 27 21:42:50 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 27 21:42:50 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 27 21:42:50 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 27 21:42:50 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 27 21:42:50 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 27 21:42:50 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 27 21:42:50 localhost kernel: iommu: Default domain type: Translated
Jan 27 21:42:50 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 27 21:42:50 localhost kernel: SCSI subsystem initialized
Jan 27 21:42:50 localhost kernel: ACPI: bus type USB registered
Jan 27 21:42:50 localhost kernel: usbcore: registered new interface driver usbfs
Jan 27 21:42:50 localhost kernel: usbcore: registered new interface driver hub
Jan 27 21:42:50 localhost kernel: usbcore: registered new device driver usb
Jan 27 21:42:50 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 27 21:42:50 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 27 21:42:50 localhost kernel: PTP clock support registered
Jan 27 21:42:50 localhost kernel: EDAC MC: Ver: 3.0.0
Jan 27 21:42:50 localhost kernel: NetLabel: Initializing
Jan 27 21:42:50 localhost kernel: NetLabel:  domain hash size = 128
Jan 27 21:42:50 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 27 21:42:50 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Jan 27 21:42:50 localhost kernel: PCI: Using ACPI for IRQ routing
Jan 27 21:42:50 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 27 21:42:50 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 27 21:42:50 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Jan 27 21:42:50 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 27 21:42:50 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 27 21:42:50 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 27 21:42:50 localhost kernel: vgaarb: loaded
Jan 27 21:42:50 localhost kernel: clocksource: Switched to clocksource kvm-clock
Jan 27 21:42:50 localhost kernel: VFS: Disk quotas dquot_6.6.0
Jan 27 21:42:50 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 27 21:42:50 localhost kernel: pnp: PnP ACPI init
Jan 27 21:42:50 localhost kernel: pnp 00:03: [dma 2]
Jan 27 21:42:50 localhost kernel: pnp: PnP ACPI: found 5 devices
Jan 27 21:42:50 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 27 21:42:50 localhost kernel: NET: Registered PF_INET protocol family
Jan 27 21:42:50 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 27 21:42:50 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 27 21:42:50 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 27 21:42:50 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 27 21:42:50 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 27 21:42:50 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 27 21:42:50 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 27 21:42:50 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 27 21:42:50 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 27 21:42:50 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 27 21:42:50 localhost kernel: NET: Registered PF_XDP protocol family
Jan 27 21:42:50 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 27 21:42:50 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 27 21:42:50 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 27 21:42:50 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 27 21:42:50 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 27 21:42:50 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 27 21:42:50 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 27 21:42:50 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 27 21:42:50 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 72526 usecs
Jan 27 21:42:50 localhost kernel: PCI: CLS 0 bytes, default 64
Jan 27 21:42:50 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 27 21:42:50 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 27 21:42:50 localhost kernel: ACPI: bus type thunderbolt registered
Jan 27 21:42:50 localhost kernel: Trying to unpack rootfs image as initramfs...
Jan 27 21:42:50 localhost kernel: Initialise system trusted keyrings
Jan 27 21:42:50 localhost kernel: Key type blacklist registered
Jan 27 21:42:50 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 27 21:42:50 localhost kernel: zbud: loaded
Jan 27 21:42:50 localhost kernel: integrity: Platform Keyring initialized
Jan 27 21:42:50 localhost kernel: integrity: Machine keyring initialized
Jan 27 21:42:50 localhost kernel: Freeing initrd memory: 87956K
Jan 27 21:42:50 localhost kernel: NET: Registered PF_ALG protocol family
Jan 27 21:42:50 localhost kernel: xor: automatically using best checksumming function   avx       
Jan 27 21:42:50 localhost kernel: Key type asymmetric registered
Jan 27 21:42:50 localhost kernel: Asymmetric key parser 'x509' registered
Jan 27 21:42:50 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 27 21:42:50 localhost kernel: io scheduler mq-deadline registered
Jan 27 21:42:50 localhost kernel: io scheduler kyber registered
Jan 27 21:42:50 localhost kernel: io scheduler bfq registered
Jan 27 21:42:50 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 27 21:42:50 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 27 21:42:50 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 27 21:42:50 localhost kernel: ACPI: button: Power Button [PWRF]
Jan 27 21:42:50 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 27 21:42:50 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 27 21:42:50 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 27 21:42:50 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 27 21:42:50 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 27 21:42:50 localhost kernel: Non-volatile memory driver v1.3
Jan 27 21:42:50 localhost kernel: rdac: device handler registered
Jan 27 21:42:50 localhost kernel: hp_sw: device handler registered
Jan 27 21:42:50 localhost kernel: emc: device handler registered
Jan 27 21:42:50 localhost kernel: alua: device handler registered
Jan 27 21:42:50 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 27 21:42:50 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 27 21:42:50 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 27 21:42:50 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 27 21:42:50 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 27 21:42:50 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 27 21:42:50 localhost kernel: usb usb1: Product: UHCI Host Controller
Jan 27 21:42:50 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-661.el9.x86_64 uhci_hcd
Jan 27 21:42:50 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 27 21:42:50 localhost kernel: hub 1-0:1.0: USB hub found
Jan 27 21:42:50 localhost kernel: hub 1-0:1.0: 2 ports detected
Jan 27 21:42:50 localhost kernel: usbcore: registered new interface driver usbserial_generic
Jan 27 21:42:50 localhost kernel: usbserial: USB Serial support registered for generic
Jan 27 21:42:50 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 27 21:42:50 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 27 21:42:50 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 27 21:42:50 localhost kernel: mousedev: PS/2 mouse device common for all mice
Jan 27 21:42:50 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 27 21:42:50 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 27 21:42:50 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 27 21:42:50 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 27 21:42:50 localhost kernel: rtc_cmos 00:04: registered as rtc0
Jan 27 21:42:50 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-01-27T21:42:49 UTC (1769550169)
Jan 27 21:42:50 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 27 21:42:50 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 27 21:42:50 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 27 21:42:50 localhost kernel: usbcore: registered new interface driver usbhid
Jan 27 21:42:50 localhost kernel: usbhid: USB HID core driver
Jan 27 21:42:50 localhost kernel: drop_monitor: Initializing network drop monitor service
Jan 27 21:42:50 localhost kernel: Initializing XFRM netlink socket
Jan 27 21:42:50 localhost kernel: NET: Registered PF_INET6 protocol family
Jan 27 21:42:50 localhost kernel: Segment Routing with IPv6
Jan 27 21:42:50 localhost kernel: NET: Registered PF_PACKET protocol family
Jan 27 21:42:50 localhost kernel: mpls_gso: MPLS GSO support
Jan 27 21:42:50 localhost kernel: IPI shorthand broadcast: enabled
Jan 27 21:42:50 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Jan 27 21:42:50 localhost kernel: AES CTR mode by8 optimization enabled
Jan 27 21:42:50 localhost kernel: sched_clock: Marking stable (1230008961, 144610862)->(1482953589, -108333766)
Jan 27 21:42:50 localhost kernel: registered taskstats version 1
Jan 27 21:42:50 localhost kernel: Loading compiled-in X.509 certificates
Jan 27 21:42:50 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 27 21:42:50 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 27 21:42:50 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 27 21:42:50 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 27 21:42:50 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 27 21:42:50 localhost kernel: Demotion targets for Node 0: null
Jan 27 21:42:50 localhost kernel: page_owner is disabled
Jan 27 21:42:50 localhost kernel: Key type .fscrypt registered
Jan 27 21:42:50 localhost kernel: Key type fscrypt-provisioning registered
Jan 27 21:42:50 localhost kernel: Key type big_key registered
Jan 27 21:42:50 localhost kernel: Key type encrypted registered
Jan 27 21:42:50 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 27 21:42:50 localhost kernel: Loading compiled-in module X.509 certificates
Jan 27 21:42:50 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 27 21:42:50 localhost kernel: ima: Allocated hash algorithm: sha256
Jan 27 21:42:50 localhost kernel: ima: No architecture policies found
Jan 27 21:42:50 localhost kernel: evm: Initialising EVM extended attributes:
Jan 27 21:42:50 localhost kernel: evm: security.selinux
Jan 27 21:42:50 localhost kernel: evm: security.SMACK64 (disabled)
Jan 27 21:42:50 localhost kernel: evm: security.SMACK64EXEC (disabled)
Jan 27 21:42:50 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 27 21:42:50 localhost kernel: evm: security.SMACK64MMAP (disabled)
Jan 27 21:42:50 localhost kernel: evm: security.apparmor (disabled)
Jan 27 21:42:50 localhost kernel: evm: security.ima
Jan 27 21:42:50 localhost kernel: evm: security.capability
Jan 27 21:42:50 localhost kernel: evm: HMAC attrs: 0x1
Jan 27 21:42:50 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 27 21:42:50 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 27 21:42:50 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 27 21:42:50 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Jan 27 21:42:50 localhost kernel: usb 1-1: Manufacturer: QEMU
Jan 27 21:42:50 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 27 21:42:50 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 27 21:42:50 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 27 21:42:50 localhost kernel: Running certificate verification RSA selftest
Jan 27 21:42:50 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 27 21:42:50 localhost kernel: Running certificate verification ECDSA selftest
Jan 27 21:42:50 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 27 21:42:50 localhost kernel: clk: Disabling unused clocks
Jan 27 21:42:50 localhost kernel: Freeing unused decrypted memory: 2028K
Jan 27 21:42:50 localhost kernel: Freeing unused kernel image (initmem) memory: 4200K
Jan 27 21:42:50 localhost kernel: Write protecting the kernel read-only data: 30720k
Jan 27 21:42:50 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Jan 27 21:42:50 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 27 21:42:50 localhost kernel: Run /init as init process
Jan 27 21:42:50 localhost kernel:   with arguments:
Jan 27 21:42:50 localhost kernel:     /init
Jan 27 21:42:50 localhost kernel:   with environment:
Jan 27 21:42:50 localhost kernel:     HOME=/
Jan 27 21:42:50 localhost kernel:     TERM=linux
Jan 27 21:42:50 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64
Jan 27 21:42:50 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 27 21:42:50 localhost systemd[1]: Detected virtualization kvm.
Jan 27 21:42:50 localhost systemd[1]: Detected architecture x86-64.
Jan 27 21:42:50 localhost systemd[1]: Running in initrd.
Jan 27 21:42:50 localhost systemd[1]: No hostname configured, using default hostname.
Jan 27 21:42:50 localhost systemd[1]: Hostname set to <localhost>.
Jan 27 21:42:50 localhost systemd[1]: Initializing machine ID from VM UUID.
Jan 27 21:42:50 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Jan 27 21:42:50 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 27 21:42:50 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 27 21:42:50 localhost systemd[1]: Reached target Initrd /usr File System.
Jan 27 21:42:50 localhost systemd[1]: Reached target Local File Systems.
Jan 27 21:42:50 localhost systemd[1]: Reached target Path Units.
Jan 27 21:42:50 localhost systemd[1]: Reached target Slice Units.
Jan 27 21:42:50 localhost systemd[1]: Reached target Swaps.
Jan 27 21:42:50 localhost systemd[1]: Reached target Timer Units.
Jan 27 21:42:50 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 27 21:42:50 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Jan 27 21:42:50 localhost systemd[1]: Listening on Journal Socket.
Jan 27 21:42:50 localhost systemd[1]: Listening on udev Control Socket.
Jan 27 21:42:50 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 27 21:42:50 localhost systemd[1]: Reached target Socket Units.
Jan 27 21:42:50 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 27 21:42:50 localhost systemd[1]: Starting Journal Service...
Jan 27 21:42:50 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 27 21:42:50 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 27 21:42:50 localhost systemd[1]: Starting Create System Users...
Jan 27 21:42:50 localhost systemd[1]: Starting Setup Virtual Console...
Jan 27 21:42:50 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 27 21:42:50 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 27 21:42:50 localhost systemd[1]: Finished Create System Users.
Jan 27 21:42:50 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 27 21:42:50 localhost systemd-journald[306]: Journal started
Jan 27 21:42:50 localhost systemd-journald[306]: Runtime Journal (/run/log/journal/3d6f663013434c09b4591f5514c0a933) is 8.0M, max 153.6M, 145.6M free.
Jan 27 21:42:50 localhost systemd-sysusers[311]: Creating group 'users' with GID 100.
Jan 27 21:42:50 localhost systemd-sysusers[311]: Creating group 'dbus' with GID 81.
Jan 27 21:42:50 localhost systemd-sysusers[311]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 27 21:42:50 localhost systemd[1]: Started Journal Service.
Jan 27 21:42:50 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 27 21:42:50 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 27 21:42:50 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 27 21:42:50 localhost systemd[1]: Finished Setup Virtual Console.
Jan 27 21:42:50 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 27 21:42:50 localhost systemd[1]: Starting dracut cmdline hook...
Jan 27 21:42:50 localhost dracut-cmdline[328]: dracut-9 dracut-057-102.git20250818.el9
Jan 27 21:42:50 localhost dracut-cmdline[328]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 27 21:42:50 localhost systemd[1]: Finished dracut cmdline hook.
Jan 27 21:42:50 localhost systemd[1]: Starting dracut pre-udev hook...
Jan 27 21:42:50 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 27 21:42:50 localhost kernel: device-mapper: uevent: version 1.0.3
Jan 27 21:42:50 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 27 21:42:50 localhost kernel: RPC: Registered named UNIX socket transport module.
Jan 27 21:42:50 localhost kernel: RPC: Registered udp transport module.
Jan 27 21:42:50 localhost kernel: RPC: Registered tcp transport module.
Jan 27 21:42:50 localhost kernel: RPC: Registered tcp-with-tls transport module.
Jan 27 21:42:50 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 27 21:42:51 localhost rpc.statd[444]: Version 2.5.4 starting
Jan 27 21:42:51 localhost rpc.statd[444]: Initializing NSM state
Jan 27 21:42:51 localhost rpc.idmapd[449]: Setting log level to 0
Jan 27 21:42:51 localhost systemd[1]: Finished dracut pre-udev hook.
Jan 27 21:42:51 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 27 21:42:51 localhost systemd-udevd[462]: Using default interface naming scheme 'rhel-9.0'.
Jan 27 21:42:51 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 27 21:42:51 localhost systemd[1]: Starting dracut pre-trigger hook...
Jan 27 21:42:51 localhost systemd[1]: Finished dracut pre-trigger hook.
Jan 27 21:42:51 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 27 21:42:51 localhost systemd[1]: Created slice Slice /system/modprobe.
Jan 27 21:42:51 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 27 21:42:51 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 27 21:42:51 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 27 21:42:51 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 27 21:42:51 localhost systemd[1]: Mounting Kernel Configuration File System...
Jan 27 21:42:51 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 27 21:42:51 localhost systemd[1]: Reached target Network.
Jan 27 21:42:51 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 27 21:42:51 localhost systemd[1]: Starting dracut initqueue hook...
Jan 27 21:42:51 localhost systemd[1]: Mounted Kernel Configuration File System.
Jan 27 21:42:51 localhost systemd[1]: Reached target System Initialization.
Jan 27 21:42:51 localhost systemd[1]: Reached target Basic System.
Jan 27 21:42:51 localhost systemd-udevd[465]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 21:42:51 localhost kernel: libata version 3.00 loaded.
Jan 27 21:42:51 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 27 21:42:51 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Jan 27 21:42:51 localhost kernel: scsi host0: ata_piix
Jan 27 21:42:51 localhost kernel: scsi host1: ata_piix
Jan 27 21:42:51 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 27 21:42:51 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 27 21:42:51 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 27 21:42:51 localhost kernel:  vda: vda1
Jan 27 21:42:51 localhost kernel: ata1: found unknown device (class 0)
Jan 27 21:42:51 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 27 21:42:51 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 27 21:42:51 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 27 21:42:51 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 27 21:42:51 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 27 21:42:51 localhost systemd[1]: Found device /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 27 21:42:51 localhost systemd[1]: Reached target Initrd Root Device.
Jan 27 21:42:51 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 27 21:42:51 localhost systemd[1]: Finished dracut initqueue hook.
Jan 27 21:42:51 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Jan 27 21:42:51 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Jan 27 21:42:51 localhost systemd[1]: Reached target Remote File Systems.
Jan 27 21:42:51 localhost systemd[1]: Starting dracut pre-mount hook...
Jan 27 21:42:51 localhost systemd[1]: Finished dracut pre-mount hook.
Jan 27 21:42:51 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40...
Jan 27 21:42:51 localhost systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Jan 27 21:42:51 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 27 21:42:51 localhost systemd[1]: Mounting /sysroot...
Jan 27 21:42:52 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 27 21:42:52 localhost kernel: XFS (vda1): Mounting V5 Filesystem 22ac9141-3960-4912-b20e-19fc8a328d40
Jan 27 21:42:52 localhost kernel: XFS (vda1): Ending clean mount
Jan 27 21:42:52 localhost systemd[1]: Mounted /sysroot.
Jan 27 21:42:52 localhost systemd[1]: Reached target Initrd Root File System.
Jan 27 21:42:52 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 27 21:42:52 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 27 21:42:52 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 27 21:42:52 localhost systemd[1]: Reached target Initrd File Systems.
Jan 27 21:42:52 localhost systemd[1]: Reached target Initrd Default Target.
Jan 27 21:42:52 localhost systemd[1]: Starting dracut mount hook...
Jan 27 21:42:52 localhost systemd[1]: Finished dracut mount hook.
Jan 27 21:42:52 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 27 21:42:52 localhost rpc.idmapd[449]: exiting on signal 15
Jan 27 21:42:52 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 27 21:42:52 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 27 21:42:52 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 27 21:42:52 localhost systemd[1]: Stopped target Network.
Jan 27 21:42:52 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 27 21:42:52 localhost systemd[1]: Stopped target Timer Units.
Jan 27 21:42:52 localhost systemd[1]: dbus.socket: Deactivated successfully.
Jan 27 21:42:52 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 27 21:42:52 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 27 21:42:52 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 27 21:42:52 localhost systemd[1]: Stopped target Initrd Default Target.
Jan 27 21:42:52 localhost systemd[1]: Stopped target Basic System.
Jan 27 21:42:52 localhost systemd[1]: Stopped target Initrd Root Device.
Jan 27 21:42:52 localhost systemd[1]: Stopped target Initrd /usr File System.
Jan 27 21:42:52 localhost systemd[1]: Stopped target Path Units.
Jan 27 21:42:52 localhost systemd[1]: Stopped target Remote File Systems.
Jan 27 21:42:52 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 27 21:42:52 localhost systemd[1]: Stopped target Slice Units.
Jan 27 21:42:52 localhost systemd[1]: Stopped target Socket Units.
Jan 27 21:42:52 localhost systemd[1]: Stopped target System Initialization.
Jan 27 21:42:52 localhost systemd[1]: Stopped target Local File Systems.
Jan 27 21:42:52 localhost systemd[1]: Stopped target Swaps.
Jan 27 21:42:52 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 27 21:42:52 localhost systemd[1]: Stopped dracut mount hook.
Jan 27 21:42:52 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 27 21:42:52 localhost systemd[1]: Stopped dracut pre-mount hook.
Jan 27 21:42:52 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Jan 27 21:42:52 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 27 21:42:52 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 27 21:42:52 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 27 21:42:52 localhost systemd[1]: Stopped dracut initqueue hook.
Jan 27 21:42:52 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 27 21:42:52 localhost systemd[1]: Stopped Apply Kernel Variables.
Jan 27 21:42:52 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 27 21:42:52 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Jan 27 21:42:52 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 27 21:42:52 localhost systemd[1]: Stopped Coldplug All udev Devices.
Jan 27 21:42:52 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 27 21:42:52 localhost systemd[1]: Stopped dracut pre-trigger hook.
Jan 27 21:42:52 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 27 21:42:52 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 27 21:42:52 localhost systemd[1]: Stopped Setup Virtual Console.
Jan 27 21:42:52 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 27 21:42:52 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 27 21:42:52 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 27 21:42:52 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 27 21:42:52 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 27 21:42:52 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 27 21:42:52 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 27 21:42:52 localhost systemd[1]: Closed udev Control Socket.
Jan 27 21:42:52 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 27 21:42:52 localhost systemd[1]: Closed udev Kernel Socket.
Jan 27 21:42:52 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 27 21:42:52 localhost systemd[1]: Stopped dracut pre-udev hook.
Jan 27 21:42:52 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 27 21:42:52 localhost systemd[1]: Stopped dracut cmdline hook.
Jan 27 21:42:52 localhost systemd[1]: Starting Cleanup udev Database...
Jan 27 21:42:52 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 27 21:42:52 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 27 21:42:52 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 27 21:42:52 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Jan 27 21:42:52 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 27 21:42:52 localhost systemd[1]: Stopped Create System Users.
Jan 27 21:42:52 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 27 21:42:52 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 27 21:42:52 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 27 21:42:52 localhost systemd[1]: Finished Cleanup udev Database.
Jan 27 21:42:52 localhost systemd[1]: Reached target Switch Root.
Jan 27 21:42:52 localhost systemd[1]: Starting Switch Root...
Jan 27 21:42:52 localhost systemd[1]: Switching root.
Jan 27 21:42:52 localhost systemd-journald[306]: Journal stopped
Jan 27 21:42:53 localhost systemd-journald[306]: Received SIGTERM from PID 1 (systemd).
Jan 27 21:42:53 localhost kernel: audit: type=1404 audit(1769550173.103:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 27 21:42:53 localhost kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 21:42:53 localhost kernel: SELinux:  policy capability open_perms=1
Jan 27 21:42:53 localhost kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 21:42:53 localhost kernel: SELinux:  policy capability always_check_network=0
Jan 27 21:42:53 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 21:42:53 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 21:42:53 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 21:42:53 localhost kernel: audit: type=1403 audit(1769550173.265:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 27 21:42:53 localhost systemd[1]: Successfully loaded SELinux policy in 165.309ms.
Jan 27 21:42:53 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.498ms.
Jan 27 21:42:53 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 27 21:42:53 localhost systemd[1]: Detected virtualization kvm.
Jan 27 21:42:53 localhost systemd[1]: Detected architecture x86-64.
Jan 27 21:42:53 localhost systemd-rc-local-generator[637]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 21:42:53 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 27 21:42:53 localhost systemd[1]: Stopped Switch Root.
Jan 27 21:42:53 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 27 21:42:53 localhost systemd[1]: Created slice Slice /system/getty.
Jan 27 21:42:53 localhost systemd[1]: Created slice Slice /system/serial-getty.
Jan 27 21:42:53 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Jan 27 21:42:53 localhost systemd[1]: Created slice User and Session Slice.
Jan 27 21:42:53 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 27 21:42:53 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Jan 27 21:42:53 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 27 21:42:53 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 27 21:42:53 localhost systemd[1]: Stopped target Switch Root.
Jan 27 21:42:53 localhost systemd[1]: Stopped target Initrd File Systems.
Jan 27 21:42:53 localhost systemd[1]: Stopped target Initrd Root File System.
Jan 27 21:42:53 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Jan 27 21:42:53 localhost systemd[1]: Reached target Path Units.
Jan 27 21:42:53 localhost systemd[1]: Reached target rpc_pipefs.target.
Jan 27 21:42:53 localhost systemd[1]: Reached target Slice Units.
Jan 27 21:42:53 localhost systemd[1]: Reached target Swaps.
Jan 27 21:42:53 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Jan 27 21:42:53 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Jan 27 21:42:53 localhost systemd[1]: Reached target RPC Port Mapper.
Jan 27 21:42:53 localhost systemd[1]: Listening on Process Core Dump Socket.
Jan 27 21:42:53 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Jan 27 21:42:53 localhost systemd[1]: Listening on udev Control Socket.
Jan 27 21:42:53 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 27 21:42:53 localhost systemd[1]: Mounting Huge Pages File System...
Jan 27 21:42:53 localhost systemd[1]: Mounting POSIX Message Queue File System...
Jan 27 21:42:53 localhost systemd[1]: Mounting Kernel Debug File System...
Jan 27 21:42:53 localhost systemd[1]: Mounting Kernel Trace File System...
Jan 27 21:42:53 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 27 21:42:53 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 27 21:42:53 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 27 21:42:53 localhost systemd[1]: Starting Load Kernel Module drm...
Jan 27 21:42:53 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Jan 27 21:42:53 localhost systemd[1]: Starting Load Kernel Module fuse...
Jan 27 21:42:53 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 27 21:42:53 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 27 21:42:53 localhost systemd[1]: Stopped File System Check on Root Device.
Jan 27 21:42:53 localhost systemd[1]: Stopped Journal Service.
Jan 27 21:42:53 localhost systemd[1]: Starting Journal Service...
Jan 27 21:42:53 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 27 21:42:53 localhost systemd[1]: Starting Generate network units from Kernel command line...
Jan 27 21:42:53 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 27 21:42:53 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Jan 27 21:42:53 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 27 21:42:53 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 27 21:42:53 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 27 21:42:53 localhost kernel: fuse: init (API version 7.37)
Jan 27 21:42:53 localhost systemd[1]: Mounted Huge Pages File System.
Jan 27 21:42:53 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 27 21:42:53 localhost systemd-journald[678]: Journal started
Jan 27 21:42:53 localhost systemd-journald[678]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 27 21:42:53 localhost systemd[1]: Queued start job for default target Multi-User System.
Jan 27 21:42:53 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 27 21:42:53 localhost systemd[1]: Started Journal Service.
Jan 27 21:42:53 localhost systemd[1]: Mounted POSIX Message Queue File System.
Jan 27 21:42:53 localhost systemd[1]: Mounted Kernel Debug File System.
Jan 27 21:42:53 localhost systemd[1]: Mounted Kernel Trace File System.
Jan 27 21:42:53 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 27 21:42:53 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 27 21:42:53 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 27 21:42:53 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 27 21:42:53 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 27 21:42:53 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 27 21:42:53 localhost systemd[1]: Finished Load Kernel Module fuse.
Jan 27 21:42:53 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 27 21:42:53 localhost systemd[1]: Finished Generate network units from Kernel command line.
Jan 27 21:42:53 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 27 21:42:53 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 27 21:42:53 localhost kernel: ACPI: bus type drm_connector registered
Jan 27 21:42:53 localhost systemd[1]: Mounting FUSE Control File System...
Jan 27 21:42:53 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 27 21:42:53 localhost systemd[1]: Starting Rebuild Hardware Database...
Jan 27 21:42:53 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 27 21:42:54 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 27 21:42:54 localhost systemd[1]: Starting Load/Save OS Random Seed...
Jan 27 21:42:54 localhost systemd[1]: Starting Create System Users...
Jan 27 21:42:54 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 27 21:42:54 localhost systemd[1]: Finished Load Kernel Module drm.
Jan 27 21:42:54 localhost systemd-journald[678]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 27 21:42:54 localhost systemd-journald[678]: Received client request to flush runtime journal.
Jan 27 21:42:54 localhost systemd[1]: Mounted FUSE Control File System.
Jan 27 21:42:54 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 27 21:42:54 localhost systemd[1]: Finished Load/Save OS Random Seed.
Jan 27 21:42:54 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 27 21:42:54 localhost systemd[1]: Finished Create System Users.
Jan 27 21:42:54 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 27 21:42:54 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 27 21:42:54 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 27 21:42:54 localhost systemd[1]: Reached target Preparation for Local File Systems.
Jan 27 21:42:54 localhost systemd[1]: Reached target Local File Systems.
Jan 27 21:42:54 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 27 21:42:54 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 27 21:42:54 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 27 21:42:54 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 27 21:42:54 localhost systemd[1]: Starting Automatic Boot Loader Update...
Jan 27 21:42:54 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 27 21:42:54 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 27 21:42:54 localhost bootctl[694]: Couldn't find EFI system partition, skipping.
Jan 27 21:42:54 localhost systemd[1]: Finished Automatic Boot Loader Update.
Jan 27 21:42:54 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 27 21:42:54 localhost systemd[1]: Starting Security Auditing Service...
Jan 27 21:42:54 localhost systemd[1]: Starting RPC Bind...
Jan 27 21:42:54 localhost systemd[1]: Starting Rebuild Journal Catalog...
Jan 27 21:42:54 localhost auditd[700]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 27 21:42:54 localhost auditd[700]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 27 21:42:54 localhost systemd[1]: Started RPC Bind.
Jan 27 21:42:54 localhost systemd[1]: Finished Rebuild Journal Catalog.
Jan 27 21:42:54 localhost augenrules[705]: /sbin/augenrules: No change
Jan 27 21:42:54 localhost augenrules[720]: No rules
Jan 27 21:42:54 localhost augenrules[720]: enabled 1
Jan 27 21:42:54 localhost augenrules[720]: failure 1
Jan 27 21:42:54 localhost augenrules[720]: pid 700
Jan 27 21:42:54 localhost augenrules[720]: rate_limit 0
Jan 27 21:42:54 localhost augenrules[720]: backlog_limit 8192
Jan 27 21:42:54 localhost augenrules[720]: lost 0
Jan 27 21:42:54 localhost augenrules[720]: backlog 0
Jan 27 21:42:54 localhost augenrules[720]: backlog_wait_time 60000
Jan 27 21:42:54 localhost augenrules[720]: backlog_wait_time_actual 0
Jan 27 21:42:54 localhost augenrules[720]: enabled 1
Jan 27 21:42:54 localhost augenrules[720]: failure 1
Jan 27 21:42:54 localhost augenrules[720]: pid 700
Jan 27 21:42:54 localhost augenrules[720]: rate_limit 0
Jan 27 21:42:54 localhost augenrules[720]: backlog_limit 8192
Jan 27 21:42:54 localhost augenrules[720]: lost 0
Jan 27 21:42:54 localhost augenrules[720]: backlog 0
Jan 27 21:42:54 localhost augenrules[720]: backlog_wait_time 60000
Jan 27 21:42:54 localhost augenrules[720]: backlog_wait_time_actual 0
Jan 27 21:42:54 localhost augenrules[720]: enabled 1
Jan 27 21:42:54 localhost augenrules[720]: failure 1
Jan 27 21:42:54 localhost augenrules[720]: pid 700
Jan 27 21:42:54 localhost augenrules[720]: rate_limit 0
Jan 27 21:42:54 localhost augenrules[720]: backlog_limit 8192
Jan 27 21:42:54 localhost augenrules[720]: lost 0
Jan 27 21:42:54 localhost augenrules[720]: backlog 2
Jan 27 21:42:54 localhost augenrules[720]: backlog_wait_time 60000
Jan 27 21:42:54 localhost augenrules[720]: backlog_wait_time_actual 0
Jan 27 21:42:54 localhost systemd[1]: Started Security Auditing Service.
Jan 27 21:42:54 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 27 21:42:54 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 27 21:42:54 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 27 21:42:54 localhost systemd[1]: Finished Rebuild Hardware Database.
Jan 27 21:42:54 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 27 21:42:54 localhost systemd[1]: Starting Update is Completed...
Jan 27 21:42:54 localhost systemd[1]: Finished Update is Completed.
Jan 27 21:42:54 localhost systemd-udevd[728]: Using default interface naming scheme 'rhel-9.0'.
Jan 27 21:42:54 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 27 21:42:54 localhost systemd[1]: Reached target System Initialization.
Jan 27 21:42:54 localhost systemd[1]: Started dnf makecache --timer.
Jan 27 21:42:54 localhost systemd[1]: Started Daily rotation of log files.
Jan 27 21:42:54 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 27 21:42:54 localhost systemd[1]: Reached target Timer Units.
Jan 27 21:42:54 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 27 21:42:54 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 27 21:42:54 localhost systemd[1]: Reached target Socket Units.
Jan 27 21:42:54 localhost systemd[1]: Starting D-Bus System Message Bus...
Jan 27 21:42:54 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 27 21:42:54 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 27 21:42:54 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 27 21:42:54 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 27 21:42:54 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 27 21:42:54 localhost systemd-udevd[731]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 21:42:54 localhost systemd[1]: Started D-Bus System Message Bus.
Jan 27 21:42:54 localhost systemd[1]: Reached target Basic System.
Jan 27 21:42:54 localhost dbus-broker-lau[744]: Ready
Jan 27 21:42:54 localhost systemd[1]: Starting NTP client/server...
Jan 27 21:42:54 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 27 21:42:54 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 27 21:42:54 localhost systemd[1]: Starting IPv4 firewall with iptables...
Jan 27 21:42:54 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 27 21:42:54 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 27 21:42:54 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 27 21:42:54 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 27 21:42:54 localhost systemd[1]: Started irqbalance daemon.
Jan 27 21:42:54 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 27 21:42:54 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 27 21:42:54 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 27 21:42:54 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 27 21:42:54 localhost systemd[1]: Reached target sshd-keygen.target.
Jan 27 21:42:54 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 27 21:42:54 localhost systemd[1]: Reached target User and Group Name Lookups.
Jan 27 21:42:54 localhost chronyd[792]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 27 21:42:54 localhost chronyd[792]: Loaded 0 symmetric keys
Jan 27 21:42:54 localhost chronyd[792]: Using right/UTC timezone to obtain leap second data
Jan 27 21:42:54 localhost chronyd[792]: Loaded seccomp filter (level 2)
Jan 27 21:42:54 localhost systemd[1]: Starting User Login Management...
Jan 27 21:42:54 localhost systemd[1]: Started NTP client/server.
Jan 27 21:42:54 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 27 21:42:55 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 27 21:42:55 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 27 21:42:55 localhost systemd-logind[789]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 27 21:42:55 localhost systemd-logind[789]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 27 21:42:55 localhost systemd-logind[789]: New seat seat0.
Jan 27 21:42:55 localhost systemd[1]: Started User Login Management.
Jan 27 21:42:55 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 27 21:42:55 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 27 21:42:55 localhost kernel: Console: switching to colour dummy device 80x25
Jan 27 21:42:55 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 27 21:42:55 localhost kernel: [drm] features: -context_init
Jan 27 21:42:55 localhost kernel: [drm] number of scanouts: 1
Jan 27 21:42:55 localhost kernel: [drm] number of cap sets: 0
Jan 27 21:42:55 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 27 21:42:55 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 27 21:42:55 localhost kernel: Console: switching to colour frame buffer device 128x48
Jan 27 21:42:55 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 27 21:42:55 localhost kernel: kvm_amd: TSC scaling supported
Jan 27 21:42:55 localhost kernel: kvm_amd: Nested Virtualization enabled
Jan 27 21:42:55 localhost kernel: kvm_amd: Nested Paging enabled
Jan 27 21:42:55 localhost kernel: kvm_amd: LBR virtualization supported
Jan 27 21:42:55 localhost iptables.init[775]: iptables: Applying firewall rules: [  OK  ]
Jan 27 21:42:55 localhost systemd[1]: Finished IPv4 firewall with iptables.
Jan 27 21:42:55 localhost cloud-init[836]: Cloud-init v. 24.4-8.el9 running 'init-local' at Tue, 27 Jan 2026 21:42:55 +0000. Up 7.15 seconds.
Jan 27 21:42:55 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan 27 21:42:55 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Jan 27 21:42:55 localhost systemd[1]: run-cloud\x2dinit-tmp-tmp16fi5oa4.mount: Deactivated successfully.
Jan 27 21:42:55 localhost systemd[1]: Starting Hostname Service...
Jan 27 21:42:55 localhost systemd[1]: Started Hostname Service.
Jan 27 21:42:55 np0005598180.novalocal systemd-hostnamed[850]: Hostname set to <np0005598180.novalocal> (static)
Jan 27 21:42:55 np0005598180.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 27 21:42:56 np0005598180.novalocal systemd[1]: Reached target Preparation for Network.
Jan 27 21:42:56 np0005598180.novalocal systemd[1]: Starting Network Manager...
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.0908] NetworkManager (version 1.54.3-2.el9) is starting... (boot:b296a529-9762-4dd6-b2a2-416e3ccb95c7)
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.0914] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1085] manager[0x5577ba6c9000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1150] hostname: hostname: using hostnamed
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1151] hostname: static hostname changed from (none) to "np0005598180.novalocal"
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1158] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1339] manager[0x5577ba6c9000]: rfkill: Wi-Fi hardware radio set enabled
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1339] manager[0x5577ba6c9000]: rfkill: WWAN hardware radio set enabled
Jan 27 21:42:56 np0005598180.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1448] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1450] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1451] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1451] manager: Networking is enabled by state file
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1455] settings: Loaded settings plugin: keyfile (internal)
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1497] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1540] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1560] dhcp: init: Using DHCP client 'internal'
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1564] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1585] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1602] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1618] device (lo): Activation: starting connection 'lo' (19c0906f-7bf5-4e0a-9fd9-d1d6accc761b)
Jan 27 21:42:56 np0005598180.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1636] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1641] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 21:42:56 np0005598180.novalocal systemd[1]: Started Network Manager.
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1688] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1695] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1699] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1702] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1706] device (eth0): carrier: link connected
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1711] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 27 21:42:56 np0005598180.novalocal systemd[1]: Reached target Network.
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1721] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1735] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1745] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1747] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1752] manager: NetworkManager state is now CONNECTING
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1754] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1768] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1773] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 27 21:42:56 np0005598180.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 27 21:42:56 np0005598180.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1855] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1860] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 27 21:42:56 np0005598180.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.1872] device (lo): Activation: successful, device activated.
Jan 27 21:42:56 np0005598180.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Jan 27 21:42:56 np0005598180.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 27 21:42:56 np0005598180.novalocal systemd[1]: Reached target NFS client services.
Jan 27 21:42:56 np0005598180.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Jan 27 21:42:56 np0005598180.novalocal systemd[1]: Reached target Remote File Systems.
Jan 27 21:42:56 np0005598180.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.8879] dhcp4 (eth0): state changed new lease, address=38.102.83.82
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.8894] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.8928] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.8968] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.8971] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.8979] manager: NetworkManager state is now CONNECTED_SITE
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.8984] device (eth0): Activation: successful, device activated.
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.8992] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 27 21:42:56 np0005598180.novalocal NetworkManager[854]: <info>  [1769550176.8998] manager: startup complete
Jan 27 21:42:56 np0005598180.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 27 21:42:56 np0005598180.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Jan 27 21:42:57 np0005598180.novalocal cloud-init[918]: Cloud-init v. 24.4-8.el9 running 'init' at Tue, 27 Jan 2026 21:42:57 +0000. Up 8.83 seconds.
Jan 27 21:42:57 np0005598180.novalocal cloud-init[918]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 27 21:42:57 np0005598180.novalocal cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 27 21:42:57 np0005598180.novalocal cloud-init[918]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 27 21:42:57 np0005598180.novalocal cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 27 21:42:57 np0005598180.novalocal cloud-init[918]: ci-info: |  eth0  | True |         38.102.83.82         | 255.255.255.0 | global | fa:16:3e:40:c5:f4 |
Jan 27 21:42:57 np0005598180.novalocal cloud-init[918]: ci-info: |  eth0  | True | fe80::f816:3eff:fe40:c5f4/64 |       .       |  link  | fa:16:3e:40:c5:f4 |
Jan 27 21:42:57 np0005598180.novalocal cloud-init[918]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 27 21:42:57 np0005598180.novalocal cloud-init[918]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 27 21:42:57 np0005598180.novalocal cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 27 21:42:57 np0005598180.novalocal cloud-init[918]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 27 21:42:57 np0005598180.novalocal cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 27 21:42:57 np0005598180.novalocal cloud-init[918]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 27 21:42:57 np0005598180.novalocal cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 27 21:42:57 np0005598180.novalocal cloud-init[918]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 27 21:42:57 np0005598180.novalocal cloud-init[918]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 27 21:42:57 np0005598180.novalocal cloud-init[918]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 27 21:42:57 np0005598180.novalocal cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 27 21:42:57 np0005598180.novalocal cloud-init[918]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 27 21:42:57 np0005598180.novalocal cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 27 21:42:57 np0005598180.novalocal cloud-init[918]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 27 21:42:57 np0005598180.novalocal cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 27 21:42:57 np0005598180.novalocal cloud-init[918]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 27 21:42:57 np0005598180.novalocal cloud-init[918]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 27 21:42:57 np0005598180.novalocal cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 27 21:42:58 np0005598180.novalocal useradd[985]: new group: name=cloud-user, GID=1001
Jan 27 21:42:58 np0005598180.novalocal useradd[985]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Jan 27 21:42:58 np0005598180.novalocal useradd[985]: add 'cloud-user' to group 'adm'
Jan 27 21:42:58 np0005598180.novalocal useradd[985]: add 'cloud-user' to group 'systemd-journal'
Jan 27 21:42:58 np0005598180.novalocal useradd[985]: add 'cloud-user' to shadow group 'adm'
Jan 27 21:42:58 np0005598180.novalocal useradd[985]: add 'cloud-user' to shadow group 'systemd-journal'
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: Generating public/private rsa key pair.
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: The key fingerprint is:
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: SHA256:vS41FhIuPWsG4TQIL3wARv8Nn0hGr2b3T+75AcdG/dE root@np0005598180.novalocal
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: The key's randomart image is:
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: +---[RSA 3072]----+
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: |o+o. o           |
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: |...oo = .     . .|
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: |  o.o* * .   . oE|
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: |   o+ X *.. o   o|
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: |     * BS+.o +  .|
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: |    o . = +.+    |
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: |       o +.o .   |
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: |        ..+ . .  |
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: |         .o=..   |
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: +----[SHA256]-----+
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: Generating public/private ecdsa key pair.
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: The key fingerprint is:
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: SHA256:fcdHHqXVh7l7oWYhNpcnPTG51ojJvlxDiRvCFsxdHaI root@np0005598180.novalocal
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: The key's randomart image is:
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: +---[ECDSA 256]---+
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: |         o . o.=*|
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: |          + o +*=|
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: |         . E ++=B|
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: |         .++**=Xo|
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: |        S.oo=+O.=|
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: |           .o=+..|
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: |           .oo o |
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: |            o    |
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: |                 |
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: +----[SHA256]-----+
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: Generating public/private ed25519 key pair.
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: The key fingerprint is:
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: SHA256:U+GnqzTnqW5VeKUPDiq8qFcXadEH58L4s1N9qRTQ+/I root@np0005598180.novalocal
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: The key's randomart image is:
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: +--[ED25519 256]--+
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: |         .ooo    |
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: |        .+.+o..  |
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: |        .o=oo+.  |
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: |        +oo+=o. .|
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: |     . .So+=.+o..|
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: |      + o..=o.oo |
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: |     o +o.=  .o  |
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: |    o ...= o   E |
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: |  .o   o+.o      |
Jan 27 21:42:58 np0005598180.novalocal cloud-init[918]: +----[SHA256]-----+
Jan 27 21:42:58 np0005598180.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Jan 27 21:42:58 np0005598180.novalocal systemd[1]: Reached target Cloud-config availability.
Jan 27 21:42:59 np0005598180.novalocal systemd[1]: Reached target Network is Online.
Jan 27 21:42:59 np0005598180.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Jan 27 21:42:59 np0005598180.novalocal systemd[1]: Starting Crash recovery kernel arming...
Jan 27 21:42:59 np0005598180.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Jan 27 21:42:59 np0005598180.novalocal systemd[1]: Starting System Logging Service...
Jan 27 21:42:59 np0005598180.novalocal sm-notify[1002]: Version 2.5.4 starting
Jan 27 21:42:59 np0005598180.novalocal systemd[1]: Starting OpenSSH server daemon...
Jan 27 21:42:59 np0005598180.novalocal systemd[1]: Starting Permit User Sessions...
Jan 27 21:42:59 np0005598180.novalocal systemd[1]: Started Notify NFS peers of a restart.
Jan 27 21:42:59 np0005598180.novalocal sshd[1004]: Server listening on 0.0.0.0 port 22.
Jan 27 21:42:59 np0005598180.novalocal sshd[1004]: Server listening on :: port 22.
Jan 27 21:42:59 np0005598180.novalocal systemd[1]: Started OpenSSH server daemon.
Jan 27 21:42:59 np0005598180.novalocal systemd[1]: Finished Permit User Sessions.
Jan 27 21:42:59 np0005598180.novalocal systemd[1]: Started Command Scheduler.
Jan 27 21:42:59 np0005598180.novalocal systemd[1]: Started Getty on tty1.
Jan 27 21:42:59 np0005598180.novalocal crond[1008]: (CRON) STARTUP (1.5.7)
Jan 27 21:42:59 np0005598180.novalocal crond[1008]: (CRON) INFO (Syslog will be used instead of sendmail.)
Jan 27 21:42:59 np0005598180.novalocal crond[1008]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 72% if used.)
Jan 27 21:42:59 np0005598180.novalocal crond[1008]: (CRON) INFO (running with inotify support)
Jan 27 21:42:59 np0005598180.novalocal rsyslogd[1003]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1003" x-info="https://www.rsyslog.com"] start
Jan 27 21:42:59 np0005598180.novalocal rsyslogd[1003]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 27 21:42:59 np0005598180.novalocal systemd[1]: Started Serial Getty on ttyS0.
Jan 27 21:42:59 np0005598180.novalocal systemd[1]: Reached target Login Prompts.
Jan 27 21:42:59 np0005598180.novalocal systemd[1]: Started System Logging Service.
Jan 27 21:42:59 np0005598180.novalocal systemd[1]: Reached target Multi-User System.
Jan 27 21:42:59 np0005598180.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 27 21:42:59 np0005598180.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 27 21:42:59 np0005598180.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 27 21:42:59 np0005598180.novalocal rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 21:42:59 np0005598180.novalocal kdumpctl[1018]: kdump: No kdump initial ramdisk found.
Jan 27 21:42:59 np0005598180.novalocal kdumpctl[1018]: kdump: Rebuilding /boot/initramfs-5.14.0-661.el9.x86_64kdump.img
Jan 27 21:42:59 np0005598180.novalocal cloud-init[1076]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Tue, 27 Jan 2026 21:42:59 +0000. Up 10.94 seconds.
Jan 27 21:42:59 np0005598180.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Jan 27 21:42:59 np0005598180.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Jan 27 21:42:59 np0005598180.novalocal cloud-init[1253]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Tue, 27 Jan 2026 21:42:59 +0000. Up 11.34 seconds.
Jan 27 21:42:59 np0005598180.novalocal dracut[1270]: dracut-057-102.git20250818.el9
Jan 27 21:42:59 np0005598180.novalocal cloud-init[1274]: #############################################################
Jan 27 21:42:59 np0005598180.novalocal cloud-init[1277]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 27 21:42:59 np0005598180.novalocal cloud-init[1290]: 256 SHA256:fcdHHqXVh7l7oWYhNpcnPTG51ojJvlxDiRvCFsxdHaI root@np0005598180.novalocal (ECDSA)
Jan 27 21:42:59 np0005598180.novalocal cloud-init[1292]: 256 SHA256:U+GnqzTnqW5VeKUPDiq8qFcXadEH58L4s1N9qRTQ+/I root@np0005598180.novalocal (ED25519)
Jan 27 21:42:59 np0005598180.novalocal cloud-init[1294]: 3072 SHA256:vS41FhIuPWsG4TQIL3wARv8Nn0hGr2b3T+75AcdG/dE root@np0005598180.novalocal (RSA)
Jan 27 21:42:59 np0005598180.novalocal cloud-init[1295]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 27 21:42:59 np0005598180.novalocal cloud-init[1296]: #############################################################
Jan 27 21:42:59 np0005598180.novalocal cloud-init[1253]: Cloud-init v. 24.4-8.el9 finished at Tue, 27 Jan 2026 21:42:59 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.53 seconds
Jan 27 21:42:59 np0005598180.novalocal sshd-session[1139]: Invalid user sol from 193.32.162.146 port 40668
Jan 27 21:42:59 np0005598180.novalocal dracut[1273]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-661.el9.x86_64kdump.img 5.14.0-661.el9.x86_64
Jan 27 21:42:59 np0005598180.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Jan 27 21:42:59 np0005598180.novalocal systemd[1]: Reached target Cloud-init target.
Jan 27 21:43:00 np0005598180.novalocal sshd-session[1139]: Connection closed by invalid user sol 193.32.162.146 port 40668 [preauth]
Jan 27 21:43:00 np0005598180.novalocal sshd-session[1372]: Unable to negotiate with 38.102.83.114 port 39340: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Jan 27 21:43:00 np0005598180.novalocal sshd-session[1382]: Unable to negotiate with 38.102.83.114 port 39362: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Jan 27 21:43:00 np0005598180.novalocal sshd-session[1387]: Unable to negotiate with 38.102.83.114 port 39372: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Jan 27 21:43:00 np0005598180.novalocal sshd-session[1367]: Connection closed by 38.102.83.114 port 39330 [preauth]
Jan 27 21:43:00 np0005598180.novalocal sshd-session[1392]: Connection reset by 38.102.83.114 port 39384 [preauth]
Jan 27 21:43:00 np0005598180.novalocal sshd-session[1397]: Connection closed by 38.102.83.114 port 39392 [preauth]
Jan 27 21:43:00 np0005598180.novalocal sshd-session[1377]: Connection closed by 38.102.83.114 port 39348 [preauth]
Jan 27 21:43:00 np0005598180.novalocal sshd-session[1402]: Unable to negotiate with 38.102.83.114 port 39402: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Jan 27 21:43:00 np0005598180.novalocal sshd-session[1407]: Unable to negotiate with 38.102.83.114 port 39412: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Jan 27 21:43:00 np0005598180.novalocal dracut[1273]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 27 21:43:00 np0005598180.novalocal dracut[1273]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 27 21:43:00 np0005598180.novalocal dracut[1273]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 27 21:43:00 np0005598180.novalocal dracut[1273]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 27 21:43:00 np0005598180.novalocal dracut[1273]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 27 21:43:00 np0005598180.novalocal dracut[1273]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 27 21:43:00 np0005598180.novalocal dracut[1273]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 27 21:43:00 np0005598180.novalocal dracut[1273]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 27 21:43:00 np0005598180.novalocal dracut[1273]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 27 21:43:00 np0005598180.novalocal dracut[1273]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 27 21:43:00 np0005598180.novalocal dracut[1273]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 27 21:43:00 np0005598180.novalocal dracut[1273]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 27 21:43:00 np0005598180.novalocal dracut[1273]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 27 21:43:00 np0005598180.novalocal dracut[1273]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 27 21:43:00 np0005598180.novalocal dracut[1273]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Jan 27 21:43:00 np0005598180.novalocal dracut[1273]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Jan 27 21:43:00 np0005598180.novalocal dracut[1273]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 27 21:43:00 np0005598180.novalocal dracut[1273]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 27 21:43:00 np0005598180.novalocal dracut[1273]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 27 21:43:00 np0005598180.novalocal dracut[1273]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 27 21:43:00 np0005598180.novalocal dracut[1273]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 27 21:43:00 np0005598180.novalocal dracut[1273]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 27 21:43:00 np0005598180.novalocal dracut[1273]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 27 21:43:00 np0005598180.novalocal dracut[1273]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 27 21:43:00 np0005598180.novalocal dracut[1273]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 27 21:43:00 np0005598180.novalocal dracut[1273]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 27 21:43:00 np0005598180.novalocal dracut[1273]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 27 21:43:00 np0005598180.novalocal dracut[1273]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 27 21:43:00 np0005598180.novalocal dracut[1273]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 27 21:43:00 np0005598180.novalocal dracut[1273]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 27 21:43:00 np0005598180.novalocal dracut[1273]: Module 'resume' will not be installed, because it's in the list to be omitted!
Jan 27 21:43:00 np0005598180.novalocal dracut[1273]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: memstrack is not available
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: memstrack is not available
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: *** Including module: systemd ***
Jan 27 21:43:01 np0005598180.novalocal chronyd[792]: Selected source 54.39.23.64 (2.centos.pool.ntp.org)
Jan 27 21:43:01 np0005598180.novalocal chronyd[792]: System clock TAI offset set to 37 seconds
Jan 27 21:43:01 np0005598180.novalocal dracut[1273]: *** Including module: fips ***
Jan 27 21:43:02 np0005598180.novalocal dracut[1273]: *** Including module: systemd-initrd ***
Jan 27 21:43:02 np0005598180.novalocal dracut[1273]: *** Including module: i18n ***
Jan 27 21:43:02 np0005598180.novalocal dracut[1273]: *** Including module: drm ***
Jan 27 21:43:02 np0005598180.novalocal dracut[1273]: *** Including module: prefixdevname ***
Jan 27 21:43:02 np0005598180.novalocal dracut[1273]: *** Including module: kernel-modules ***
Jan 27 21:43:02 np0005598180.novalocal kernel: block vda: the capability attribute has been deprecated.
Jan 27 21:43:03 np0005598180.novalocal dracut[1273]: *** Including module: kernel-modules-extra ***
Jan 27 21:43:03 np0005598180.novalocal dracut[1273]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Jan 27 21:43:03 np0005598180.novalocal dracut[1273]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Jan 27 21:43:03 np0005598180.novalocal dracut[1273]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Jan 27 21:43:03 np0005598180.novalocal dracut[1273]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Jan 27 21:43:03 np0005598180.novalocal dracut[1273]: *** Including module: qemu ***
Jan 27 21:43:03 np0005598180.novalocal dracut[1273]: *** Including module: fstab-sys ***
Jan 27 21:43:03 np0005598180.novalocal dracut[1273]: *** Including module: rootfs-block ***
Jan 27 21:43:03 np0005598180.novalocal dracut[1273]: *** Including module: terminfo ***
Jan 27 21:43:03 np0005598180.novalocal dracut[1273]: *** Including module: udev-rules ***
Jan 27 21:43:04 np0005598180.novalocal dracut[1273]: Skipping udev rule: 91-permissions.rules
Jan 27 21:43:04 np0005598180.novalocal dracut[1273]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 27 21:43:04 np0005598180.novalocal dracut[1273]: *** Including module: virtiofs ***
Jan 27 21:43:04 np0005598180.novalocal dracut[1273]: *** Including module: dracut-systemd ***
Jan 27 21:43:04 np0005598180.novalocal dracut[1273]: *** Including module: usrmount ***
Jan 27 21:43:04 np0005598180.novalocal dracut[1273]: *** Including module: base ***
Jan 27 21:43:04 np0005598180.novalocal dracut[1273]: *** Including module: fs-lib ***
Jan 27 21:43:04 np0005598180.novalocal dracut[1273]: *** Including module: kdumpbase ***
Jan 27 21:43:05 np0005598180.novalocal dracut[1273]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 27 21:43:05 np0005598180.novalocal dracut[1273]:   microcode_ctl module: mangling fw_dir
Jan 27 21:43:05 np0005598180.novalocal dracut[1273]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 27 21:43:05 np0005598180.novalocal dracut[1273]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 27 21:43:05 np0005598180.novalocal dracut[1273]:     microcode_ctl: configuration "intel" is ignored
Jan 27 21:43:05 np0005598180.novalocal dracut[1273]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 27 21:43:05 np0005598180.novalocal dracut[1273]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 27 21:43:05 np0005598180.novalocal dracut[1273]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 27 21:43:05 np0005598180.novalocal irqbalance[782]: Cannot change IRQ 35 affinity: Operation not permitted
Jan 27 21:43:05 np0005598180.novalocal irqbalance[782]: IRQ 35 affinity is now unmanaged
Jan 27 21:43:05 np0005598180.novalocal irqbalance[782]: Cannot change IRQ 33 affinity: Operation not permitted
Jan 27 21:43:05 np0005598180.novalocal irqbalance[782]: IRQ 33 affinity is now unmanaged
Jan 27 21:43:05 np0005598180.novalocal irqbalance[782]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 27 21:43:05 np0005598180.novalocal irqbalance[782]: IRQ 31 affinity is now unmanaged
Jan 27 21:43:05 np0005598180.novalocal irqbalance[782]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 27 21:43:05 np0005598180.novalocal irqbalance[782]: IRQ 28 affinity is now unmanaged
Jan 27 21:43:05 np0005598180.novalocal irqbalance[782]: Cannot change IRQ 34 affinity: Operation not permitted
Jan 27 21:43:05 np0005598180.novalocal irqbalance[782]: IRQ 34 affinity is now unmanaged
Jan 27 21:43:05 np0005598180.novalocal irqbalance[782]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 27 21:43:05 np0005598180.novalocal irqbalance[782]: IRQ 32 affinity is now unmanaged
Jan 27 21:43:05 np0005598180.novalocal irqbalance[782]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 27 21:43:05 np0005598180.novalocal irqbalance[782]: IRQ 30 affinity is now unmanaged
Jan 27 21:43:05 np0005598180.novalocal irqbalance[782]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 27 21:43:05 np0005598180.novalocal irqbalance[782]: IRQ 29 affinity is now unmanaged
Jan 27 21:43:05 np0005598180.novalocal dracut[1273]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 27 21:43:05 np0005598180.novalocal dracut[1273]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 27 21:43:05 np0005598180.novalocal dracut[1273]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 27 21:43:05 np0005598180.novalocal dracut[1273]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 27 21:43:05 np0005598180.novalocal dracut[1273]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 27 21:43:05 np0005598180.novalocal dracut[1273]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 27 21:43:05 np0005598180.novalocal dracut[1273]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 27 21:43:05 np0005598180.novalocal dracut[1273]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 27 21:43:05 np0005598180.novalocal dracut[1273]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 27 21:43:05 np0005598180.novalocal dracut[1273]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 27 21:43:05 np0005598180.novalocal dracut[1273]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 27 21:43:05 np0005598180.novalocal dracut[1273]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 27 21:43:05 np0005598180.novalocal dracut[1273]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 27 21:43:05 np0005598180.novalocal dracut[1273]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 27 21:43:05 np0005598180.novalocal dracut[1273]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 27 21:43:05 np0005598180.novalocal dracut[1273]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 27 21:43:05 np0005598180.novalocal dracut[1273]: *** Including module: openssl ***
Jan 27 21:43:05 np0005598180.novalocal dracut[1273]: *** Including module: shutdown ***
Jan 27 21:43:05 np0005598180.novalocal dracut[1273]: *** Including module: squash ***
Jan 27 21:43:05 np0005598180.novalocal dracut[1273]: *** Including modules done ***
Jan 27 21:43:05 np0005598180.novalocal dracut[1273]: *** Installing kernel module dependencies ***
Jan 27 21:43:06 np0005598180.novalocal dracut[1273]: *** Installing kernel module dependencies done ***
Jan 27 21:43:06 np0005598180.novalocal dracut[1273]: *** Resolving executable dependencies ***
Jan 27 21:43:06 np0005598180.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 27 21:43:08 np0005598180.novalocal dracut[1273]: *** Resolving executable dependencies done ***
Jan 27 21:43:08 np0005598180.novalocal dracut[1273]: *** Generating early-microcode cpio image ***
Jan 27 21:43:08 np0005598180.novalocal dracut[1273]: *** Store current command line parameters ***
Jan 27 21:43:08 np0005598180.novalocal dracut[1273]: Stored kernel commandline:
Jan 27 21:43:08 np0005598180.novalocal dracut[1273]: No dracut internal kernel commandline stored in the initramfs
Jan 27 21:43:09 np0005598180.novalocal dracut[1273]: *** Install squash loader ***
Jan 27 21:43:10 np0005598180.novalocal dracut[1273]: *** Squashing the files inside the initramfs ***
Jan 27 21:43:11 np0005598180.novalocal dracut[1273]: *** Squashing the files inside the initramfs done ***
Jan 27 21:43:11 np0005598180.novalocal dracut[1273]: *** Creating image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' ***
Jan 27 21:43:11 np0005598180.novalocal dracut[1273]: *** Hardlinking files ***
Jan 27 21:43:11 np0005598180.novalocal dracut[1273]: Mode:           real
Jan 27 21:43:11 np0005598180.novalocal dracut[1273]: Files:          50
Jan 27 21:43:11 np0005598180.novalocal dracut[1273]: Linked:         0 files
Jan 27 21:43:11 np0005598180.novalocal dracut[1273]: Compared:       0 xattrs
Jan 27 21:43:11 np0005598180.novalocal dracut[1273]: Compared:       0 files
Jan 27 21:43:11 np0005598180.novalocal dracut[1273]: Saved:          0 B
Jan 27 21:43:11 np0005598180.novalocal dracut[1273]: Duration:       0.000875 seconds
Jan 27 21:43:11 np0005598180.novalocal dracut[1273]: *** Hardlinking files done ***
Jan 27 21:43:11 np0005598180.novalocal dracut[1273]: *** Creating initramfs image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' done ***
Jan 27 21:43:12 np0005598180.novalocal kdumpctl[1018]: kdump: kexec: loaded kdump kernel
Jan 27 21:43:12 np0005598180.novalocal kdumpctl[1018]: kdump: Starting kdump: [OK]
Jan 27 21:43:12 np0005598180.novalocal systemd[1]: Finished Crash recovery kernel arming.
Jan 27 21:43:12 np0005598180.novalocal systemd[1]: Startup finished in 1.690s (kernel) + 3.049s (initrd) + 19.129s (userspace) = 23.870s.
Jan 27 21:43:26 np0005598180.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 27 21:43:44 np0005598180.novalocal sshd-session[4305]: Accepted publickey for zuul from 38.102.83.114 port 57408 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Jan 27 21:43:44 np0005598180.novalocal systemd[1]: Created slice User Slice of UID 1000.
Jan 27 21:43:44 np0005598180.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 27 21:43:44 np0005598180.novalocal systemd-logind[789]: New session 1 of user zuul.
Jan 27 21:43:44 np0005598180.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 27 21:43:44 np0005598180.novalocal systemd[1]: Starting User Manager for UID 1000...
Jan 27 21:43:44 np0005598180.novalocal systemd[4309]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 21:43:44 np0005598180.novalocal systemd[4309]: Queued start job for default target Main User Target.
Jan 27 21:43:44 np0005598180.novalocal systemd[4309]: Created slice User Application Slice.
Jan 27 21:43:44 np0005598180.novalocal systemd[4309]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 27 21:43:44 np0005598180.novalocal systemd[4309]: Started Daily Cleanup of User's Temporary Directories.
Jan 27 21:43:44 np0005598180.novalocal systemd[4309]: Reached target Paths.
Jan 27 21:43:44 np0005598180.novalocal systemd[4309]: Reached target Timers.
Jan 27 21:43:44 np0005598180.novalocal systemd[4309]: Starting D-Bus User Message Bus Socket...
Jan 27 21:43:44 np0005598180.novalocal systemd[4309]: Starting Create User's Volatile Files and Directories...
Jan 27 21:43:44 np0005598180.novalocal systemd[4309]: Finished Create User's Volatile Files and Directories.
Jan 27 21:43:44 np0005598180.novalocal systemd[4309]: Listening on D-Bus User Message Bus Socket.
Jan 27 21:43:44 np0005598180.novalocal systemd[4309]: Reached target Sockets.
Jan 27 21:43:44 np0005598180.novalocal systemd[4309]: Reached target Basic System.
Jan 27 21:43:44 np0005598180.novalocal systemd[4309]: Reached target Main User Target.
Jan 27 21:43:44 np0005598180.novalocal systemd[4309]: Startup finished in 161ms.
Jan 27 21:43:44 np0005598180.novalocal systemd[1]: Started User Manager for UID 1000.
Jan 27 21:43:44 np0005598180.novalocal systemd[1]: Started Session 1 of User zuul.
Jan 27 21:43:44 np0005598180.novalocal sshd-session[4305]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 21:43:45 np0005598180.novalocal python3[4391]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 21:43:47 np0005598180.novalocal python3[4419]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 21:43:54 np0005598180.novalocal python3[4477]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 21:43:55 np0005598180.novalocal python3[4517]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 27 21:43:57 np0005598180.novalocal python3[4543]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDOjbE4/UZqQN35AxC3vVhkIMRshuh0qjxjLF1Ef4ve9lU24hFDRE0HFZhbV4vLo8D/j3z6hEAKnZarVDb5vCJpbPlQBR//gpwbjThAi9YwHMgrHTygLh3EwSB7szNWEYFRVmGfI7ovIn9AtSGBLQWOQRlqzQHhMnXcPDGgZy2Pv935qhwC/cYVgTilTI2Xopx2oZfVIUCfbX87/0jGkg8RsaCTKrf7uggYIdFV6y5DoZ9nYq9WL8drT8ecdc8cyedJJPM318Mh1D/YH/zTKtk8IAXASfL6gUaFRrHxh23DzJ+SNo1eTO0yyr0YgoDnJf0WJcMiipoPuCjG0GSoume3aqV2aOwx6t3iU+UYCelXziQsm9yVcqXZ7pQe9pbnizwgOQUvYMOI4cUMPUS73pt5aNyw298uTfC6g87YovnLBjklXzO3wnR4162hRNi6tCxWVQlPPUXPHJIQu+PngkxlR6kUVjl/zqTGb8w+nE0CeoZZAVBWh78Y/AOmZkt+Sck= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 21:43:57 np0005598180.novalocal python3[4567]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 21:43:58 np0005598180.novalocal python3[4666]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 21:43:58 np0005598180.novalocal python3[4737]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769550237.972835-207-76455203014568/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=ff45eb8eee054a5c93a7f3b7eab70c5c_id_rsa follow=False checksum=b6c356729e7c569fac2ef91cd4f9974bd7a33d17 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 21:43:59 np0005598180.novalocal python3[4860]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 21:43:59 np0005598180.novalocal python3[4931]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769550238.939545-240-68314098919736/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=ff45eb8eee054a5c93a7f3b7eab70c5c_id_rsa.pub follow=False checksum=f417592b98e19b878dc81e8331c034c0a91dcbfe backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 21:44:01 np0005598180.novalocal python3[4979]: ansible-ping Invoked with data=pong
Jan 27 21:44:01 np0005598180.novalocal python3[5003]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 21:44:04 np0005598180.novalocal python3[5061]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 27 21:44:05 np0005598180.novalocal python3[5093]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 21:44:05 np0005598180.novalocal python3[5117]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 21:44:05 np0005598180.novalocal python3[5141]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 21:44:06 np0005598180.novalocal python3[5165]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 21:44:06 np0005598180.novalocal python3[5189]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 21:44:07 np0005598180.novalocal python3[5213]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 21:44:08 np0005598180.novalocal sudo[5237]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqwgevutxccrybxivhycfiuwdooyjpob ; /usr/bin/python3'
Jan 27 21:44:08 np0005598180.novalocal sudo[5237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:44:08 np0005598180.novalocal python3[5239]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 21:44:08 np0005598180.novalocal sudo[5237]: pam_unix(sudo:session): session closed for user root
Jan 27 21:44:09 np0005598180.novalocal sudo[5315]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avoofalhnobwrlyhweuuxcpilgnjjgca ; /usr/bin/python3'
Jan 27 21:44:09 np0005598180.novalocal sudo[5315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:44:09 np0005598180.novalocal python3[5317]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 21:44:09 np0005598180.novalocal sudo[5315]: pam_unix(sudo:session): session closed for user root
Jan 27 21:44:09 np0005598180.novalocal sudo[5388]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffpjqdeobhmmpkkvfzaensrflncsiajp ; /usr/bin/python3'
Jan 27 21:44:09 np0005598180.novalocal sudo[5388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:44:09 np0005598180.novalocal python3[5390]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769550248.8540547-21-259345666090470/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 21:44:09 np0005598180.novalocal sudo[5388]: pam_unix(sudo:session): session closed for user root
Jan 27 21:44:10 np0005598180.novalocal python3[5438]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 21:44:10 np0005598180.novalocal python3[5462]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 21:44:11 np0005598180.novalocal python3[5486]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 21:44:11 np0005598180.novalocal python3[5510]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 21:44:11 np0005598180.novalocal python3[5534]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 21:44:11 np0005598180.novalocal python3[5558]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 21:44:12 np0005598180.novalocal python3[5582]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 21:44:12 np0005598180.novalocal python3[5606]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 21:44:12 np0005598180.novalocal python3[5630]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 21:44:13 np0005598180.novalocal python3[5654]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 21:44:13 np0005598180.novalocal python3[5678]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 21:44:13 np0005598180.novalocal python3[5702]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 21:44:14 np0005598180.novalocal python3[5726]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 21:44:14 np0005598180.novalocal python3[5750]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 21:44:14 np0005598180.novalocal python3[5774]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 21:44:14 np0005598180.novalocal python3[5798]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 21:44:15 np0005598180.novalocal python3[5822]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 21:44:15 np0005598180.novalocal python3[5846]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 21:44:15 np0005598180.novalocal python3[5870]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 21:44:16 np0005598180.novalocal python3[5894]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 21:44:16 np0005598180.novalocal python3[5918]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 21:44:16 np0005598180.novalocal python3[5942]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 21:44:16 np0005598180.novalocal python3[5966]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 21:44:17 np0005598180.novalocal python3[5990]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 21:44:17 np0005598180.novalocal python3[6014]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 21:44:17 np0005598180.novalocal python3[6038]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 21:44:20 np0005598180.novalocal sudo[6062]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzcwvmihoanbugxfimeepdkpvjwexrxf ; /usr/bin/python3'
Jan 27 21:44:20 np0005598180.novalocal sudo[6062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:44:20 np0005598180.novalocal python3[6064]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 27 21:44:20 np0005598180.novalocal systemd[1]: Starting Time & Date Service...
Jan 27 21:44:20 np0005598180.novalocal systemd[1]: Started Time & Date Service.
Jan 27 21:44:20 np0005598180.novalocal systemd-timedated[6066]: Changed time zone to 'UTC' (UTC).
Jan 27 21:44:20 np0005598180.novalocal sudo[6062]: pam_unix(sudo:session): session closed for user root
Jan 27 21:44:20 np0005598180.novalocal sudo[6093]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqhgwaqkvmifqowkgsuaitijibmyqozk ; /usr/bin/python3'
Jan 27 21:44:20 np0005598180.novalocal sudo[6093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:44:21 np0005598180.novalocal python3[6095]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 21:44:21 np0005598180.novalocal sudo[6093]: pam_unix(sudo:session): session closed for user root
Jan 27 21:44:21 np0005598180.novalocal python3[6171]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 21:44:22 np0005598180.novalocal python3[6242]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769550261.3399544-153-38086023382057/source _original_basename=tmpg4cjxn46 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 21:44:22 np0005598180.novalocal python3[6342]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 21:44:23 np0005598180.novalocal python3[6413]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769550262.3326695-183-726434776135/source _original_basename=tmpvorvwn3w follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 21:44:23 np0005598180.novalocal sudo[6513]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puiyswtjdsaafqkreuuplnpfrurywydx ; /usr/bin/python3'
Jan 27 21:44:23 np0005598180.novalocal sudo[6513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:44:23 np0005598180.novalocal python3[6515]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 21:44:23 np0005598180.novalocal sudo[6513]: pam_unix(sudo:session): session closed for user root
Jan 27 21:44:24 np0005598180.novalocal sudo[6586]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzbgfpbyaqfjosotwovuiepnbarsimzj ; /usr/bin/python3'
Jan 27 21:44:24 np0005598180.novalocal sudo[6586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:44:24 np0005598180.novalocal python3[6588]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769550263.4878523-231-182460534793026/source _original_basename=tmprsc6e4h7 follow=False checksum=d300ef2a9a28a235d7b76ee497641bd17d004fed backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 21:44:24 np0005598180.novalocal sudo[6586]: pam_unix(sudo:session): session closed for user root
Jan 27 21:44:24 np0005598180.novalocal python3[6636]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 21:44:25 np0005598180.novalocal python3[6662]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 21:44:25 np0005598180.novalocal sudo[6740]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eutrxlnzyirwidcnpicnizlgzkugkacg ; /usr/bin/python3'
Jan 27 21:44:25 np0005598180.novalocal sudo[6740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:44:25 np0005598180.novalocal python3[6742]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 21:44:25 np0005598180.novalocal sudo[6740]: pam_unix(sudo:session): session closed for user root
Jan 27 21:44:25 np0005598180.novalocal sudo[6813]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akmfwjocbndmhzmbrkcatfrbymznmqsa ; /usr/bin/python3'
Jan 27 21:44:25 np0005598180.novalocal sudo[6813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:44:26 np0005598180.novalocal python3[6815]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769550265.3438292-273-113609097987269/source _original_basename=tmpz92a2izo follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 21:44:26 np0005598180.novalocal sudo[6813]: pam_unix(sudo:session): session closed for user root
Jan 27 21:44:26 np0005598180.novalocal sudo[6864]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvsnlathhntalzanwybtmohtnihnakjm ; /usr/bin/python3'
Jan 27 21:44:26 np0005598180.novalocal sudo[6864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:44:26 np0005598180.novalocal python3[6866]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-af3b-a8eb-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 21:44:26 np0005598180.novalocal sudo[6864]: pam_unix(sudo:session): session closed for user root
Jan 27 21:44:27 np0005598180.novalocal python3[6894]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-af3b-a8eb-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 27 21:44:28 np0005598180.novalocal python3[6923]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 21:44:29 np0005598180.novalocal sshd-session[6924]: Invalid user ubuntu from 92.118.39.56 port 43938
Jan 27 21:44:29 np0005598180.novalocal sshd-session[6924]: Connection closed by invalid user ubuntu 92.118.39.56 port 43938 [preauth]
Jan 27 21:44:46 np0005598180.novalocal sudo[6949]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pamdsrjwpikqrupgoxgxgtynsnwafdme ; /usr/bin/python3'
Jan 27 21:44:46 np0005598180.novalocal sudo[6949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:44:46 np0005598180.novalocal python3[6951]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 21:44:46 np0005598180.novalocal sudo[6949]: pam_unix(sudo:session): session closed for user root
Jan 27 21:44:50 np0005598180.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 27 21:44:55 np0005598180.novalocal sshd-session[6954]: Invalid user sol from 193.32.162.146 port 49652
Jan 27 21:44:55 np0005598180.novalocal sshd-session[6954]: Connection closed by invalid user sol 193.32.162.146 port 49652 [preauth]
Jan 27 21:45:20 np0005598180.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 27 21:45:20 np0005598180.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 27 21:45:20 np0005598180.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 27 21:45:20 np0005598180.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 27 21:45:20 np0005598180.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 27 21:45:20 np0005598180.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 27 21:45:20 np0005598180.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 27 21:45:20 np0005598180.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 27 21:45:20 np0005598180.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 27 21:45:20 np0005598180.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 27 21:45:20 np0005598180.novalocal NetworkManager[854]: <info>  [1769550320.8370] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 27 21:45:20 np0005598180.novalocal systemd-udevd[6956]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 21:45:20 np0005598180.novalocal NetworkManager[854]: <info>  [1769550320.8638] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 21:45:20 np0005598180.novalocal NetworkManager[854]: <info>  [1769550320.8688] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 27 21:45:20 np0005598180.novalocal NetworkManager[854]: <info>  [1769550320.8696] device (eth1): carrier: link connected
Jan 27 21:45:20 np0005598180.novalocal NetworkManager[854]: <info>  [1769550320.8700] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 27 21:45:20 np0005598180.novalocal NetworkManager[854]: <info>  [1769550320.8714] policy: auto-activating connection 'Wired connection 1' (cd54289b-cdde-3d56-a906-8e89599c3435)
Jan 27 21:45:20 np0005598180.novalocal NetworkManager[854]: <info>  [1769550320.8722] device (eth1): Activation: starting connection 'Wired connection 1' (cd54289b-cdde-3d56-a906-8e89599c3435)
Jan 27 21:45:20 np0005598180.novalocal NetworkManager[854]: <info>  [1769550320.8724] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 21:45:20 np0005598180.novalocal NetworkManager[854]: <info>  [1769550320.8732] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 21:45:20 np0005598180.novalocal NetworkManager[854]: <info>  [1769550320.8741] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 21:45:20 np0005598180.novalocal NetworkManager[854]: <info>  [1769550320.8751] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 27 21:45:21 np0005598180.novalocal python3[6983]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-959c-7a02-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 21:45:28 np0005598180.novalocal sudo[7061]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tucojffryztgpoaapbphjoxdzgoesyps ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 27 21:45:28 np0005598180.novalocal sudo[7061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:45:28 np0005598180.novalocal python3[7063]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 21:45:28 np0005598180.novalocal sudo[7061]: pam_unix(sudo:session): session closed for user root
Jan 27 21:45:29 np0005598180.novalocal sudo[7134]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uufolictmcrswiltytbkvjiwadvzhtgu ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 27 21:45:29 np0005598180.novalocal sudo[7134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:45:29 np0005598180.novalocal python3[7136]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769550328.4741447-102-108474347669072/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=d01008bd0efd0fee34afb24261c6cbe94850c2ec backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 21:45:29 np0005598180.novalocal sudo[7134]: pam_unix(sudo:session): session closed for user root
Jan 27 21:45:29 np0005598180.novalocal sudo[7184]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bogxzdlzttosryxkmwdsqdmcemqihuej ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 27 21:45:29 np0005598180.novalocal sudo[7184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:45:30 np0005598180.novalocal python3[7186]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 21:45:30 np0005598180.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 27 21:45:30 np0005598180.novalocal systemd[1]: Stopped Network Manager Wait Online.
Jan 27 21:45:30 np0005598180.novalocal systemd[1]: Stopping Network Manager Wait Online...
Jan 27 21:45:30 np0005598180.novalocal systemd[1]: Stopping Network Manager...
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[854]: <info>  [1769550330.1558] caught SIGTERM, shutting down normally.
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[854]: <info>  [1769550330.1579] dhcp4 (eth0): canceled DHCP transaction
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[854]: <info>  [1769550330.1579] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[854]: <info>  [1769550330.1579] dhcp4 (eth0): state changed no lease
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[854]: <info>  [1769550330.1584] manager: NetworkManager state is now CONNECTING
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[854]: <info>  [1769550330.1648] dhcp4 (eth1): canceled DHCP transaction
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[854]: <info>  [1769550330.1648] dhcp4 (eth1): state changed no lease
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[854]: <info>  [1769550330.1719] exiting (success)
Jan 27 21:45:30 np0005598180.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 27 21:45:30 np0005598180.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 27 21:45:30 np0005598180.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 27 21:45:30 np0005598180.novalocal systemd[1]: Stopped Network Manager.
Jan 27 21:45:30 np0005598180.novalocal systemd[1]: NetworkManager.service: Consumed 1.494s CPU time, 9.9M memory peak.
Jan 27 21:45:30 np0005598180.novalocal systemd[1]: Starting Network Manager...
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.2424] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:b296a529-9762-4dd6-b2a2-416e3ccb95c7)
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.2428] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.2498] manager[0x55a241123000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 27 21:45:30 np0005598180.novalocal systemd[1]: Starting Hostname Service...
Jan 27 21:45:30 np0005598180.novalocal systemd[1]: Started Hostname Service.
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3403] hostname: hostname: using hostnamed
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3404] hostname: static hostname changed from (none) to "np0005598180.novalocal"
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3413] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3421] manager[0x55a241123000]: rfkill: Wi-Fi hardware radio set enabled
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3422] manager[0x55a241123000]: rfkill: WWAN hardware radio set enabled
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3468] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3468] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3469] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3470] manager: Networking is enabled by state file
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3473] settings: Loaded settings plugin: keyfile (internal)
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3480] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3522] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3535] dhcp: init: Using DHCP client 'internal'
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3540] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3548] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3555] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3568] device (lo): Activation: starting connection 'lo' (19c0906f-7bf5-4e0a-9fd9-d1d6accc761b)
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3579] device (eth0): carrier: link connected
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3587] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3594] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3594] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3604] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3614] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3624] device (eth1): carrier: link connected
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3631] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3637] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (cd54289b-cdde-3d56-a906-8e89599c3435) (indicated)
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3638] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3645] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3655] device (eth1): Activation: starting connection 'Wired connection 1' (cd54289b-cdde-3d56-a906-8e89599c3435)
Jan 27 21:45:30 np0005598180.novalocal systemd[1]: Started Network Manager.
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3665] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3670] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3673] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3676] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3680] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3693] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3696] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3698] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3701] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3706] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3708] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3714] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3716] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3751] dhcp4 (eth0): state changed new lease, address=38.102.83.82
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3756] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 27 21:45:30 np0005598180.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3817] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3824] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3825] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3830] device (lo): Activation: successful, device activated.
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3846] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3847] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3850] manager: NetworkManager state is now CONNECTED_SITE
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3853] device (eth0): Activation: successful, device activated.
Jan 27 21:45:30 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550330.3857] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 27 21:45:30 np0005598180.novalocal sudo[7184]: pam_unix(sudo:session): session closed for user root
Jan 27 21:45:30 np0005598180.novalocal python3[7270]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-959c-7a02-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 21:45:40 np0005598180.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 27 21:46:00 np0005598180.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 27 21:46:13 np0005598180.novalocal systemd[4309]: Starting Mark boot as successful...
Jan 27 21:46:13 np0005598180.novalocal systemd[4309]: Finished Mark boot as successful.
Jan 27 21:46:15 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550375.3548] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 27 21:46:15 np0005598180.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 27 21:46:15 np0005598180.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 27 21:46:15 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550375.3877] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 27 21:46:15 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550375.3880] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 27 21:46:15 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550375.3889] device (eth1): Activation: successful, device activated.
Jan 27 21:46:15 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550375.3899] manager: startup complete
Jan 27 21:46:15 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550375.3901] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 27 21:46:15 np0005598180.novalocal NetworkManager[7195]: <warn>  [1769550375.3909] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 27 21:46:15 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550375.3923] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 27 21:46:15 np0005598180.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 27 21:46:15 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550375.4028] dhcp4 (eth1): canceled DHCP transaction
Jan 27 21:46:15 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550375.4028] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 27 21:46:15 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550375.4028] dhcp4 (eth1): state changed no lease
Jan 27 21:46:15 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550375.4048] policy: auto-activating connection 'ci-private-network' (60c54900-5f38-5c39-a049-82583fdf5947)
Jan 27 21:46:15 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550375.4055] device (eth1): Activation: starting connection 'ci-private-network' (60c54900-5f38-5c39-a049-82583fdf5947)
Jan 27 21:46:15 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550375.4056] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 21:46:15 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550375.4061] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 21:46:15 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550375.4072] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 21:46:15 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550375.4086] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 21:46:15 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550375.4158] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 21:46:15 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550375.4161] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 21:46:15 np0005598180.novalocal NetworkManager[7195]: <info>  [1769550375.4171] device (eth1): Activation: successful, device activated.
Jan 27 21:46:25 np0005598180.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 27 21:46:26 np0005598180.novalocal sudo[7374]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcesrnqxfrkfpmvepcxxjtiedefarxuh ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 27 21:46:26 np0005598180.novalocal sudo[7374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:46:26 np0005598180.novalocal python3[7376]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 21:46:26 np0005598180.novalocal sudo[7374]: pam_unix(sudo:session): session closed for user root
Jan 27 21:46:27 np0005598180.novalocal sudo[7447]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsbxixqnfomhfjxskgpsajfnuwojfjhk ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 27 21:46:27 np0005598180.novalocal sudo[7447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:46:27 np0005598180.novalocal python3[7449]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769550386.5936048-259-264715084713049/source _original_basename=tmpqw2py9ei follow=False checksum=d1609b6b28b12d3eb053ebffc8c62c1e62e5817b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 21:46:27 np0005598180.novalocal sudo[7447]: pam_unix(sudo:session): session closed for user root
Jan 27 21:46:39 np0005598180.novalocal sshd-session[7474]: Invalid user sol from 92.118.39.56 port 59628
Jan 27 21:46:39 np0005598180.novalocal sshd-session[7474]: Connection closed by invalid user sol 92.118.39.56 port 59628 [preauth]
Jan 27 21:47:01 np0005598180.novalocal sshd-session[7476]: Invalid user sol from 193.32.162.146 port 58640
Jan 27 21:47:02 np0005598180.novalocal sshd-session[7476]: Connection closed by invalid user sol 193.32.162.146 port 58640 [preauth]
Jan 27 21:47:27 np0005598180.novalocal sshd-session[4318]: Received disconnect from 38.102.83.114 port 57408:11: disconnected by user
Jan 27 21:47:27 np0005598180.novalocal sshd-session[4318]: Disconnected from user zuul 38.102.83.114 port 57408
Jan 27 21:47:27 np0005598180.novalocal sshd-session[4305]: pam_unix(sshd:session): session closed for user zuul
Jan 27 21:47:27 np0005598180.novalocal systemd-logind[789]: Session 1 logged out. Waiting for processes to exit.
Jan 27 21:48:48 np0005598180.novalocal sshd-session[7479]: Invalid user sol from 92.118.39.56 port 47092
Jan 27 21:48:48 np0005598180.novalocal sshd-session[7479]: Connection closed by invalid user sol 92.118.39.56 port 47092 [preauth]
Jan 27 21:49:09 np0005598180.novalocal sshd-session[7482]: Invalid user solana from 193.32.162.146 port 39428
Jan 27 21:49:09 np0005598180.novalocal sshd-session[7482]: Connection closed by invalid user solana 193.32.162.146 port 39428 [preauth]
Jan 27 21:49:12 np0005598180.novalocal sshd-session[7485]: error: kex_exchange_identification: read: Connection reset by peer
Jan 27 21:49:12 np0005598180.novalocal sshd-session[7485]: Connection reset by 176.120.22.52 port 23263
Jan 27 21:49:13 np0005598180.novalocal systemd[4309]: Created slice User Background Tasks Slice.
Jan 27 21:49:13 np0005598180.novalocal systemd[4309]: Starting Cleanup of User's Temporary Files and Directories...
Jan 27 21:49:13 np0005598180.novalocal systemd[4309]: Finished Cleanup of User's Temporary Files and Directories.
Jan 27 21:49:40 np0005598180.novalocal sshd-session[7489]: banner exchange: Connection from 13.86.105.235 port 46746: invalid format
Jan 27 21:49:49 np0005598180.novalocal sshd-session[7487]: Connection closed by 13.86.105.235 port 46742 [preauth]
Jan 27 21:50:48 np0005598180.novalocal sshd-session[7490]: Invalid user sol from 92.118.39.56 port 34532
Jan 27 21:50:48 np0005598180.novalocal sshd-session[7490]: Connection closed by invalid user sol 92.118.39.56 port 34532 [preauth]
Jan 27 21:51:12 np0005598180.novalocal sshd-session[7493]: Invalid user solana from 193.32.162.146 port 48428
Jan 27 21:51:12 np0005598180.novalocal sshd-session[7493]: Connection closed by invalid user solana 193.32.162.146 port 48428 [preauth]
Jan 27 21:52:51 np0005598180.novalocal sshd-session[7495]: Invalid user solv from 92.118.39.56 port 50222
Jan 27 21:52:51 np0005598180.novalocal sshd-session[7495]: Connection closed by invalid user solv 92.118.39.56 port 50222 [preauth]
Jan 27 21:53:17 np0005598180.novalocal sshd-session[7497]: Invalid user solana from 193.32.162.146 port 57442
Jan 27 21:53:17 np0005598180.novalocal sshd-session[7497]: Connection closed by invalid user solana 193.32.162.146 port 57442 [preauth]
Jan 27 21:53:54 np0005598180.novalocal sshd-session[7501]: Accepted publickey for zuul from 38.102.83.114 port 55852 ssh2: RSA SHA256:ZuKoWm/C8Whnhgf9tPVFWdXLNeFqjD7XfMzDvbUlFFI
Jan 27 21:53:54 np0005598180.novalocal systemd-logind[789]: New session 3 of user zuul.
Jan 27 21:53:54 np0005598180.novalocal systemd[1]: Started Session 3 of User zuul.
Jan 27 21:53:54 np0005598180.novalocal sshd-session[7501]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 21:53:54 np0005598180.novalocal sudo[7528]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gguyjdujcszoyifjhuuzgnivgxznbqkp ; /usr/bin/python3'
Jan 27 21:53:54 np0005598180.novalocal sudo[7528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:53:54 np0005598180.novalocal python3[7530]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-6c5d-9678-000000002185-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 21:53:54 np0005598180.novalocal sudo[7528]: pam_unix(sudo:session): session closed for user root
Jan 27 21:53:54 np0005598180.novalocal sudo[7557]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzbqjxkocvhfsiddtfbllzymeyoukkxk ; /usr/bin/python3'
Jan 27 21:53:54 np0005598180.novalocal sudo[7557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:53:54 np0005598180.novalocal python3[7559]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 21:53:54 np0005598180.novalocal sudo[7557]: pam_unix(sudo:session): session closed for user root
Jan 27 21:53:55 np0005598180.novalocal sudo[7583]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmcyracnareijrksirlnyqnuxkqbnvgt ; /usr/bin/python3'
Jan 27 21:53:55 np0005598180.novalocal sudo[7583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:53:55 np0005598180.novalocal python3[7585]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 21:53:55 np0005598180.novalocal sudo[7583]: pam_unix(sudo:session): session closed for user root
Jan 27 21:53:55 np0005598180.novalocal sudo[7609]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkbdgxhugdzrewpctztiyvaznxjqcgpo ; /usr/bin/python3'
Jan 27 21:53:55 np0005598180.novalocal sudo[7609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:53:55 np0005598180.novalocal python3[7611]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 21:53:55 np0005598180.novalocal sudo[7609]: pam_unix(sudo:session): session closed for user root
Jan 27 21:53:55 np0005598180.novalocal sudo[7635]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klusjgefbyducvuincqulvdwjmstzkge ; /usr/bin/python3'
Jan 27 21:53:55 np0005598180.novalocal sudo[7635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:53:55 np0005598180.novalocal python3[7637]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 21:53:55 np0005598180.novalocal sudo[7635]: pam_unix(sudo:session): session closed for user root
Jan 27 21:53:56 np0005598180.novalocal sudo[7661]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upgkkrsbsmzgrlswqoukavnviktjvmqm ; /usr/bin/python3'
Jan 27 21:53:56 np0005598180.novalocal sudo[7661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:53:56 np0005598180.novalocal python3[7663]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 21:53:56 np0005598180.novalocal sudo[7661]: pam_unix(sudo:session): session closed for user root
Jan 27 21:53:57 np0005598180.novalocal sudo[7739]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmvpsmleejlnbvxtptlauteeqceqsddk ; /usr/bin/python3'
Jan 27 21:53:57 np0005598180.novalocal sudo[7739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:53:57 np0005598180.novalocal python3[7741]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 21:53:57 np0005598180.novalocal sudo[7739]: pam_unix(sudo:session): session closed for user root
Jan 27 21:53:57 np0005598180.novalocal sudo[7812]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcyweuvtyygxbgnvnnrhdmrlybbpzriy ; /usr/bin/python3'
Jan 27 21:53:57 np0005598180.novalocal sudo[7812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:53:57 np0005598180.novalocal python3[7814]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769550837.032001-513-248123388013910/source _original_basename=tmp9gyl0u8x follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 21:53:57 np0005598180.novalocal sudo[7812]: pam_unix(sudo:session): session closed for user root
Jan 27 21:53:58 np0005598180.novalocal sudo[7862]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tazggsiislwscpmwyyitrahajrtqivxt ; /usr/bin/python3'
Jan 27 21:53:58 np0005598180.novalocal sudo[7862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:53:58 np0005598180.novalocal python3[7864]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 21:53:58 np0005598180.novalocal systemd[1]: Reloading.
Jan 27 21:53:58 np0005598180.novalocal systemd-rc-local-generator[7886]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 21:53:58 np0005598180.novalocal sudo[7862]: pam_unix(sudo:session): session closed for user root
Jan 27 21:54:00 np0005598180.novalocal sudo[7918]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfdafgbrkwnrvthxkwjefjxzhknymnwv ; /usr/bin/python3'
Jan 27 21:54:00 np0005598180.novalocal sudo[7918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:54:00 np0005598180.novalocal python3[7920]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 27 21:54:00 np0005598180.novalocal sudo[7918]: pam_unix(sudo:session): session closed for user root
Jan 27 21:54:00 np0005598180.novalocal sudo[7944]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxdhtmqibieptnofnovxekyxtimwrqza ; /usr/bin/python3'
Jan 27 21:54:00 np0005598180.novalocal sudo[7944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:54:00 np0005598180.novalocal python3[7946]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 21:54:00 np0005598180.novalocal sudo[7944]: pam_unix(sudo:session): session closed for user root
Jan 27 21:54:00 np0005598180.novalocal sudo[7972]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-joukzxmxsinjcbmricuxeiaagydgtscd ; /usr/bin/python3'
Jan 27 21:54:00 np0005598180.novalocal sudo[7972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:54:00 np0005598180.novalocal python3[7974]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 21:54:00 np0005598180.novalocal sudo[7972]: pam_unix(sudo:session): session closed for user root
Jan 27 21:54:00 np0005598180.novalocal sudo[8000]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnnwmacpuazbpnwiwuawpguqczmjxvnb ; /usr/bin/python3'
Jan 27 21:54:00 np0005598180.novalocal sudo[8000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:54:01 np0005598180.novalocal python3[8002]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 21:54:01 np0005598180.novalocal sudo[8000]: pam_unix(sudo:session): session closed for user root
Jan 27 21:54:01 np0005598180.novalocal sudo[8028]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pupkhdcjotvezdmjdwfeodejzntnpirt ; /usr/bin/python3'
Jan 27 21:54:01 np0005598180.novalocal sudo[8028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:54:01 np0005598180.novalocal python3[8030]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 21:54:01 np0005598180.novalocal sudo[8028]: pam_unix(sudo:session): session closed for user root
Jan 27 21:54:01 np0005598180.novalocal python3[8057]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-6c5d-9678-00000000218c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 21:54:02 np0005598180.novalocal python3[8087]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 27 21:54:04 np0005598180.novalocal sshd-session[7504]: Connection closed by 38.102.83.114 port 55852
Jan 27 21:54:04 np0005598180.novalocal sshd-session[7501]: pam_unix(sshd:session): session closed for user zuul
Jan 27 21:54:04 np0005598180.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Jan 27 21:54:04 np0005598180.novalocal systemd[1]: session-3.scope: Consumed 4.395s CPU time.
Jan 27 21:54:04 np0005598180.novalocal systemd-logind[789]: Session 3 logged out. Waiting for processes to exit.
Jan 27 21:54:04 np0005598180.novalocal systemd-logind[789]: Removed session 3.
Jan 27 21:54:05 np0005598180.novalocal sshd-session[8093]: Accepted publickey for zuul from 38.102.83.114 port 53178 ssh2: RSA SHA256:ZuKoWm/C8Whnhgf9tPVFWdXLNeFqjD7XfMzDvbUlFFI
Jan 27 21:54:05 np0005598180.novalocal systemd-logind[789]: New session 4 of user zuul.
Jan 27 21:54:05 np0005598180.novalocal systemd[1]: Started Session 4 of User zuul.
Jan 27 21:54:05 np0005598180.novalocal sshd-session[8093]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 21:54:06 np0005598180.novalocal sudo[8120]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vslfoxheemgnymrexvxlbegswuntcdrx ; /usr/bin/python3'
Jan 27 21:54:06 np0005598180.novalocal sudo[8120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:54:06 np0005598180.novalocal python3[8122]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 27 21:54:57 np0005598180.novalocal sshd-session[8415]: Invalid user solv from 92.118.39.56 port 37676
Jan 27 21:54:57 np0005598180.novalocal sshd-session[8415]: Connection closed by invalid user solv 92.118.39.56 port 37676 [preauth]
Jan 27 21:54:59 np0005598180.novalocal setsebool[8421]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 27 21:54:59 np0005598180.novalocal setsebool[8421]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 27 21:55:10 np0005598180.novalocal kernel: SELinux:  Converting 385 SID table entries...
Jan 27 21:55:10 np0005598180.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 21:55:10 np0005598180.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 27 21:55:10 np0005598180.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 21:55:10 np0005598180.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 27 21:55:10 np0005598180.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 21:55:10 np0005598180.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 21:55:10 np0005598180.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 21:55:20 np0005598180.novalocal kernel: SELinux:  Converting 388 SID table entries...
Jan 27 21:55:20 np0005598180.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 21:55:20 np0005598180.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 27 21:55:20 np0005598180.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 21:55:20 np0005598180.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 27 21:55:20 np0005598180.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 21:55:20 np0005598180.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 21:55:20 np0005598180.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 21:55:22 np0005598180.novalocal sshd-session[8459]: Invalid user sol from 193.32.162.146 port 38210
Jan 27 21:55:22 np0005598180.novalocal sshd-session[8459]: Connection closed by invalid user sol 193.32.162.146 port 38210 [preauth]
Jan 27 21:55:38 np0005598180.novalocal dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 27 21:55:39 np0005598180.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 21:55:39 np0005598180.novalocal systemd[1]: Starting man-db-cache-update.service...
Jan 27 21:55:39 np0005598180.novalocal systemd[1]: Reloading.
Jan 27 21:55:39 np0005598180.novalocal systemd-rc-local-generator[9197]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 21:55:39 np0005598180.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 21:55:40 np0005598180.novalocal sudo[8120]: pam_unix(sudo:session): session closed for user root
Jan 27 21:55:46 np0005598180.novalocal python3[14031]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163e3b-3c83-1b55-741c-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 21:55:47 np0005598180.novalocal kernel: evm: overlay not supported
Jan 27 21:55:47 np0005598180.novalocal systemd[4309]: Starting D-Bus User Message Bus...
Jan 27 21:55:47 np0005598180.novalocal dbus-broker-launch[14265]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 27 21:55:47 np0005598180.novalocal dbus-broker-launch[14265]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 27 21:55:47 np0005598180.novalocal systemd[4309]: Started D-Bus User Message Bus.
Jan 27 21:55:47 np0005598180.novalocal dbus-broker-lau[14265]: Ready
Jan 27 21:55:47 np0005598180.novalocal systemd[4309]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 27 21:55:47 np0005598180.novalocal systemd[4309]: Created slice Slice /user.
Jan 27 21:55:47 np0005598180.novalocal systemd[4309]: podman-14246.scope: unit configures an IP firewall, but not running as root.
Jan 27 21:55:47 np0005598180.novalocal systemd[4309]: (This warning is only shown for the first unit using IP firewalling.)
Jan 27 21:55:47 np0005598180.novalocal systemd[4309]: Started podman-14246.scope.
Jan 27 21:55:47 np0005598180.novalocal systemd[4309]: Started podman-pause-e555dac6.scope.
Jan 27 21:55:47 np0005598180.novalocal sudo[14488]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhpxlkozfwzmpgxytdljcavegtnhmbjh ; /usr/bin/python3'
Jan 27 21:55:47 np0005598180.novalocal sudo[14488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:55:48 np0005598180.novalocal python3[14499]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.18:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.18:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 21:55:48 np0005598180.novalocal python3[14499]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 27 21:55:48 np0005598180.novalocal sudo[14488]: pam_unix(sudo:session): session closed for user root
Jan 27 21:55:48 np0005598180.novalocal sshd-session[8096]: Connection closed by 38.102.83.114 port 53178
Jan 27 21:55:48 np0005598180.novalocal sshd-session[8093]: pam_unix(sshd:session): session closed for user zuul
Jan 27 21:55:48 np0005598180.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Jan 27 21:55:48 np0005598180.novalocal systemd[1]: session-4.scope: Consumed 59.433s CPU time.
Jan 27 21:55:48 np0005598180.novalocal systemd-logind[789]: Session 4 logged out. Waiting for processes to exit.
Jan 27 21:55:48 np0005598180.novalocal systemd-logind[789]: Removed session 4.
Jan 27 21:56:06 np0005598180.novalocal sshd-session[20792]: Connection closed by 38.102.83.151 port 58838 [preauth]
Jan 27 21:56:06 np0005598180.novalocal sshd-session[20795]: Connection closed by 38.102.83.151 port 58822 [preauth]
Jan 27 21:56:06 np0005598180.novalocal sshd-session[20798]: Unable to negotiate with 38.102.83.151 port 58870: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 27 21:56:06 np0005598180.novalocal sshd-session[20794]: Unable to negotiate with 38.102.83.151 port 58854: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 27 21:56:06 np0005598180.novalocal sshd-session[20800]: Unable to negotiate with 38.102.83.151 port 58878: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 27 21:56:11 np0005598180.novalocal sshd-session[22186]: Accepted publickey for zuul from 38.102.83.114 port 54766 ssh2: RSA SHA256:ZuKoWm/C8Whnhgf9tPVFWdXLNeFqjD7XfMzDvbUlFFI
Jan 27 21:56:11 np0005598180.novalocal systemd-logind[789]: New session 5 of user zuul.
Jan 27 21:56:11 np0005598180.novalocal systemd[1]: Started Session 5 of User zuul.
Jan 27 21:56:11 np0005598180.novalocal sshd-session[22186]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 21:56:11 np0005598180.novalocal python3[22277]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLcCY6xlne0rWu0NbaL1npQ/Yo2WsMtsaDMfADszqq1MU3VIcH2zl87ygQcpnNIT7e9kKDsIstq/pJ1Bmo6qu0k= zuul@np0005598179.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 21:56:12 np0005598180.novalocal sudo[22409]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkzdsthpyhfnnifibkoucjvqzlhfwnpg ; /usr/bin/python3'
Jan 27 21:56:12 np0005598180.novalocal sudo[22409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:56:12 np0005598180.novalocal python3[22422]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLcCY6xlne0rWu0NbaL1npQ/Yo2WsMtsaDMfADszqq1MU3VIcH2zl87ygQcpnNIT7e9kKDsIstq/pJ1Bmo6qu0k= zuul@np0005598179.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 21:56:12 np0005598180.novalocal sudo[22409]: pam_unix(sudo:session): session closed for user root
Jan 27 21:56:12 np0005598180.novalocal sudo[22711]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffpwrqeublrecetrokzdmrqlrktiiupn ; /usr/bin/python3'
Jan 27 21:56:12 np0005598180.novalocal sudo[22711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:56:13 np0005598180.novalocal python3[22723]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005598180.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 27 21:56:13 np0005598180.novalocal useradd[22784]: new group: name=cloud-admin, GID=1002
Jan 27 21:56:13 np0005598180.novalocal useradd[22784]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Jan 27 21:56:13 np0005598180.novalocal sudo[22711]: pam_unix(sudo:session): session closed for user root
Jan 27 21:56:13 np0005598180.novalocal sudo[22891]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afcoxxjcvmempdvbzocmsbpknxophyvh ; /usr/bin/python3'
Jan 27 21:56:13 np0005598180.novalocal sudo[22891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:56:13 np0005598180.novalocal python3[22902]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLcCY6xlne0rWu0NbaL1npQ/Yo2WsMtsaDMfADszqq1MU3VIcH2zl87ygQcpnNIT7e9kKDsIstq/pJ1Bmo6qu0k= zuul@np0005598179.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 27 21:56:13 np0005598180.novalocal sudo[22891]: pam_unix(sudo:session): session closed for user root
Jan 27 21:56:13 np0005598180.novalocal sudo[23123]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eknkrcimtimwkwwnshnakacpaqbkbnpd ; /usr/bin/python3'
Jan 27 21:56:13 np0005598180.novalocal sudo[23123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:56:14 np0005598180.novalocal python3[23133]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 21:56:14 np0005598180.novalocal sudo[23123]: pam_unix(sudo:session): session closed for user root
Jan 27 21:56:14 np0005598180.novalocal sudo[23357]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onvilmddejktouyeifuewwxmnedcxfpj ; /usr/bin/python3'
Jan 27 21:56:14 np0005598180.novalocal sudo[23357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:56:14 np0005598180.novalocal python3[23369]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769550973.7497344-135-103255037640883/source _original_basename=tmpvg6kkbts follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 21:56:14 np0005598180.novalocal sudo[23357]: pam_unix(sudo:session): session closed for user root
Jan 27 21:56:15 np0005598180.novalocal sudo[23668]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qshkciooernilzjvanowcmtyrgnujeol ; /usr/bin/python3'
Jan 27 21:56:15 np0005598180.novalocal sudo[23668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 21:56:15 np0005598180.novalocal python3[23678]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 27 21:56:15 np0005598180.novalocal systemd[1]: Starting Hostname Service...
Jan 27 21:56:15 np0005598180.novalocal systemd[1]: Started Hostname Service.
Jan 27 21:56:15 np0005598180.novalocal systemd-hostnamed[23777]: Changed pretty hostname to 'compute-0'
Jan 27 21:56:15 compute-0 systemd-hostnamed[23777]: Hostname set to <compute-0> (static)
Jan 27 21:56:15 compute-0 NetworkManager[7195]: <info>  [1769550975.5570] hostname: static hostname changed from "np0005598180.novalocal" to "compute-0"
Jan 27 21:56:15 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 27 21:56:15 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 27 21:56:15 compute-0 sudo[23668]: pam_unix(sudo:session): session closed for user root
Jan 27 21:56:15 compute-0 sshd-session[22225]: Connection closed by 38.102.83.114 port 54766
Jan 27 21:56:15 compute-0 sshd-session[22186]: pam_unix(sshd:session): session closed for user zuul
Jan 27 21:56:15 compute-0 systemd[1]: session-5.scope: Deactivated successfully.
Jan 27 21:56:15 compute-0 systemd[1]: session-5.scope: Consumed 2.653s CPU time.
Jan 27 21:56:15 compute-0 systemd-logind[789]: Session 5 logged out. Waiting for processes to exit.
Jan 27 21:56:15 compute-0 systemd-logind[789]: Removed session 5.
Jan 27 21:56:25 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 27 21:56:36 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 21:56:36 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 21:56:36 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1min 9.974s CPU time.
Jan 27 21:56:36 compute-0 systemd[1]: run-re1a19021773144f987642910a4291e88.service: Deactivated successfully.
Jan 27 21:56:45 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 27 21:57:21 compute-0 sshd-session[30198]: Invalid user solv from 193.32.162.146 port 47216
Jan 27 21:57:21 compute-0 sshd-session[30198]: Connection closed by invalid user solv 193.32.162.146 port 47216 [preauth]
Jan 27 21:57:37 compute-0 sshd-session[30200]: Invalid user AdminGPON from 45.148.10.121 port 60632
Jan 27 21:57:37 compute-0 sshd-session[30200]: Connection closed by invalid user AdminGPON 45.148.10.121 port 60632 [preauth]
Jan 27 21:57:53 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 27 21:57:53 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 27 21:57:53 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 27 21:57:53 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 27 21:59:21 compute-0 sshd-session[30209]: Invalid user solv from 193.32.162.146 port 56212
Jan 27 21:59:21 compute-0 sshd-session[30209]: Connection closed by invalid user solv 193.32.162.146 port 56212 [preauth]
Jan 27 22:00:53 compute-0 sshd-session[30211]: Accepted publickey for zuul from 38.102.83.151 port 46372 ssh2: RSA SHA256:ZuKoWm/C8Whnhgf9tPVFWdXLNeFqjD7XfMzDvbUlFFI
Jan 27 22:00:53 compute-0 systemd-logind[789]: New session 6 of user zuul.
Jan 27 22:00:53 compute-0 systemd[1]: Started Session 6 of User zuul.
Jan 27 22:00:53 compute-0 sshd-session[30211]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 22:00:54 compute-0 python3[30287]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:00:55 compute-0 sudo[30401]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkngtztadzvaaqqourzfqgpygujaqlcq ; /usr/bin/python3'
Jan 27 22:00:55 compute-0 sudo[30401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:00:55 compute-0 python3[30403]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 22:00:55 compute-0 sudo[30401]: pam_unix(sudo:session): session closed for user root
Jan 27 22:00:56 compute-0 sudo[30474]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrbtcnpexzfmeiooybgkjivpajwywayn ; /usr/bin/python3'
Jan 27 22:00:56 compute-0 sudo[30474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:00:56 compute-0 python3[30476]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769551255.4594703-33874-250775347534568/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:00:56 compute-0 sudo[30474]: pam_unix(sudo:session): session closed for user root
Jan 27 22:00:56 compute-0 sudo[30500]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnxibgeebvnomevrflwzceblyjmqlswm ; /usr/bin/python3'
Jan 27 22:00:56 compute-0 sudo[30500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:00:57 compute-0 python3[30502]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 22:00:57 compute-0 sudo[30500]: pam_unix(sudo:session): session closed for user root
Jan 27 22:00:57 compute-0 sudo[30573]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgrjsxfivpieiqesglqxxfodbuzgvkmp ; /usr/bin/python3'
Jan 27 22:00:57 compute-0 sudo[30573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:00:57 compute-0 python3[30575]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769551255.4594703-33874-250775347534568/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:00:57 compute-0 sudo[30573]: pam_unix(sudo:session): session closed for user root
Jan 27 22:00:57 compute-0 sudo[30599]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppudqzeijlwgcexzmhmtvznlivhniuaj ; /usr/bin/python3'
Jan 27 22:00:57 compute-0 sudo[30599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:00:57 compute-0 python3[30601]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 22:00:57 compute-0 sudo[30599]: pam_unix(sudo:session): session closed for user root
Jan 27 22:00:58 compute-0 sudo[30672]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdruwkcnyxcjhwcksrpoealzpmvdyaor ; /usr/bin/python3'
Jan 27 22:00:58 compute-0 sudo[30672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:00:58 compute-0 python3[30674]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769551255.4594703-33874-250775347534568/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:00:58 compute-0 sudo[30672]: pam_unix(sudo:session): session closed for user root
Jan 27 22:00:58 compute-0 sudo[30698]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpiaygqpljmzzjjlszlhxcgyuvcxuwlu ; /usr/bin/python3'
Jan 27 22:00:58 compute-0 sudo[30698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:00:58 compute-0 python3[30700]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 22:00:58 compute-0 sudo[30698]: pam_unix(sudo:session): session closed for user root
Jan 27 22:00:58 compute-0 sudo[30771]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzdusmahhosveivmguoddgqwwxqoemhv ; /usr/bin/python3'
Jan 27 22:00:58 compute-0 sudo[30771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:00:58 compute-0 python3[30773]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769551255.4594703-33874-250775347534568/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:00:58 compute-0 sudo[30771]: pam_unix(sudo:session): session closed for user root
Jan 27 22:00:59 compute-0 sudo[30797]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewctvlrnbfyligsnuaygtxrwfrtqhowg ; /usr/bin/python3'
Jan 27 22:00:59 compute-0 sudo[30797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:00:59 compute-0 python3[30799]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 22:00:59 compute-0 sudo[30797]: pam_unix(sudo:session): session closed for user root
Jan 27 22:00:59 compute-0 sudo[30870]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjnczgiulbywanfutztyjijobhgpmjhs ; /usr/bin/python3'
Jan 27 22:00:59 compute-0 sudo[30870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:00:59 compute-0 python3[30872]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769551255.4594703-33874-250775347534568/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:00:59 compute-0 sudo[30870]: pam_unix(sudo:session): session closed for user root
Jan 27 22:01:00 compute-0 sudo[30896]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ampvubjcldnkuoaqxpafwlurbcmxcrrw ; /usr/bin/python3'
Jan 27 22:01:00 compute-0 sudo[30896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:01:00 compute-0 python3[30898]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 22:01:00 compute-0 sudo[30896]: pam_unix(sudo:session): session closed for user root
Jan 27 22:01:00 compute-0 sudo[30969]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsrackhhvyucgmjlxhgbyxfsuqwaofvn ; /usr/bin/python3'
Jan 27 22:01:00 compute-0 sudo[30969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:01:00 compute-0 python3[30971]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769551255.4594703-33874-250775347534568/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:01:00 compute-0 sudo[30969]: pam_unix(sudo:session): session closed for user root
Jan 27 22:01:00 compute-0 sudo[30995]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivrasxqmhfgkncwbkedvvqlvybzdtimf ; /usr/bin/python3'
Jan 27 22:01:00 compute-0 sudo[30995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:01:00 compute-0 python3[30997]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 27 22:01:00 compute-0 sudo[30995]: pam_unix(sudo:session): session closed for user root
Jan 27 22:01:01 compute-0 CROND[31046]: (root) CMD (run-parts /etc/cron.hourly)
Jan 27 22:01:01 compute-0 run-parts[31054]: (/etc/cron.hourly) starting 0anacron
Jan 27 22:01:01 compute-0 anacron[31078]: Anacron started on 2026-01-27
Jan 27 22:01:01 compute-0 anacron[31078]: Will run job `cron.daily' in 6 min.
Jan 27 22:01:01 compute-0 anacron[31078]: Will run job `cron.weekly' in 26 min.
Jan 27 22:01:01 compute-0 anacron[31078]: Will run job `cron.monthly' in 46 min.
Jan 27 22:01:01 compute-0 anacron[31078]: Jobs will be executed sequentially
Jan 27 22:01:01 compute-0 run-parts[31082]: (/etc/cron.hourly) finished 0anacron
Jan 27 22:01:01 compute-0 CROND[31045]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 27 22:01:01 compute-0 sudo[31083]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alrbevxgiwdchrweaxmpebunypzqxuao ; /usr/bin/python3'
Jan 27 22:01:01 compute-0 sudo[31083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:01:01 compute-0 python3[31085]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769551255.4594703-33874-250775347534568/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:01:01 compute-0 sudo[31083]: pam_unix(sudo:session): session closed for user root
Jan 27 22:01:03 compute-0 sshd-session[31111]: Connection closed by 192.168.122.11 port 60994 [preauth]
Jan 27 22:01:03 compute-0 sshd-session[31110]: Connection closed by 192.168.122.11 port 32778 [preauth]
Jan 27 22:01:03 compute-0 sshd-session[31114]: Unable to negotiate with 192.168.122.11 port 32788: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 27 22:01:03 compute-0 sshd-session[31113]: Unable to negotiate with 192.168.122.11 port 32794: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 27 22:01:03 compute-0 sshd-session[31112]: Unable to negotiate with 192.168.122.11 port 32808: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 27 22:03:05 compute-0 sshd-session[31121]: Received disconnect from 91.224.92.78 port 42054:11:  [preauth]
Jan 27 22:03:05 compute-0 sshd-session[31121]: Disconnected from authenticating user root 91.224.92.78 port 42054 [preauth]
Jan 27 22:03:45 compute-0 python3[31146]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:07:01 compute-0 anacron[31078]: Job `cron.daily' started
Jan 27 22:07:01 compute-0 anacron[31078]: Job `cron.daily' terminated
Jan 27 22:08:45 compute-0 sshd-session[30214]: Received disconnect from 38.102.83.151 port 46372:11: disconnected by user
Jan 27 22:08:45 compute-0 sshd-session[30214]: Disconnected from user zuul 38.102.83.151 port 46372
Jan 27 22:08:45 compute-0 sshd-session[30211]: pam_unix(sshd:session): session closed for user zuul
Jan 27 22:08:45 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Jan 27 22:08:45 compute-0 systemd[1]: session-6.scope: Consumed 5.782s CPU time.
Jan 27 22:08:45 compute-0 systemd-logind[789]: Session 6 logged out. Waiting for processes to exit.
Jan 27 22:08:45 compute-0 systemd-logind[789]: Removed session 6.
Jan 27 22:09:59 compute-0 sshd-session[31153]: Connection closed by 193.251.24.160 port 50776
Jan 27 22:10:01 compute-0 sshd-session[31154]: Invalid user a from 193.251.24.160 port 50782
Jan 27 22:10:07 compute-0 sshd-session[31154]: Connection closed by invalid user a 193.251.24.160 port 50782 [preauth]
Jan 27 22:10:28 compute-0 sshd-session[31157]: Received disconnect from 91.224.92.108 port 37002:11:  [preauth]
Jan 27 22:10:28 compute-0 sshd-session[31157]: Disconnected from authenticating user root 91.224.92.108 port 37002 [preauth]
Jan 27 22:15:56 compute-0 sshd-session[31163]: Accepted publickey for zuul from 192.168.122.30 port 45006 ssh2: ECDSA SHA256:f2siSFgqhRl+V43NMPJ82N3mZUylXFtu0KAbYfQTK7A
Jan 27 22:15:56 compute-0 systemd-logind[789]: New session 7 of user zuul.
Jan 27 22:15:56 compute-0 systemd[1]: Started Session 7 of User zuul.
Jan 27 22:15:56 compute-0 sshd-session[31163]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 22:15:57 compute-0 python3.9[31316]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:15:58 compute-0 sudo[31495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azukpsxsmlwkqnshlbfqwryynhskixpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552157.9984076-27-134138339500813/AnsiballZ_command.py'
Jan 27 22:15:58 compute-0 sudo[31495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:15:58 compute-0 python3.9[31497]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:16:08 compute-0 sudo[31495]: pam_unix(sudo:session): session closed for user root
Jan 27 22:16:08 compute-0 sshd-session[31166]: Connection closed by 192.168.122.30 port 45006
Jan 27 22:16:08 compute-0 sshd-session[31163]: pam_unix(sshd:session): session closed for user zuul
Jan 27 22:16:08 compute-0 systemd-logind[789]: Session 7 logged out. Waiting for processes to exit.
Jan 27 22:16:08 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Jan 27 22:16:08 compute-0 systemd[1]: session-7.scope: Consumed 8.108s CPU time.
Jan 27 22:16:08 compute-0 systemd-logind[789]: Removed session 7.
Jan 27 22:16:14 compute-0 sshd-session[31555]: Accepted publickey for zuul from 192.168.122.30 port 60182 ssh2: ECDSA SHA256:f2siSFgqhRl+V43NMPJ82N3mZUylXFtu0KAbYfQTK7A
Jan 27 22:16:14 compute-0 systemd-logind[789]: New session 8 of user zuul.
Jan 27 22:16:14 compute-0 systemd[1]: Started Session 8 of User zuul.
Jan 27 22:16:14 compute-0 sshd-session[31555]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 22:16:15 compute-0 python3.9[31708]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:16:16 compute-0 sshd-session[31558]: Connection closed by 192.168.122.30 port 60182
Jan 27 22:16:16 compute-0 sshd-session[31555]: pam_unix(sshd:session): session closed for user zuul
Jan 27 22:16:16 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Jan 27 22:16:16 compute-0 systemd-logind[789]: Session 8 logged out. Waiting for processes to exit.
Jan 27 22:16:16 compute-0 systemd-logind[789]: Removed session 8.
Jan 27 22:16:31 compute-0 sshd-session[31737]: Accepted publickey for zuul from 192.168.122.30 port 43340 ssh2: ECDSA SHA256:f2siSFgqhRl+V43NMPJ82N3mZUylXFtu0KAbYfQTK7A
Jan 27 22:16:31 compute-0 systemd-logind[789]: New session 9 of user zuul.
Jan 27 22:16:31 compute-0 systemd[1]: Started Session 9 of User zuul.
Jan 27 22:16:31 compute-0 sshd-session[31737]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 22:16:32 compute-0 python3.9[31890]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 27 22:16:33 compute-0 python3.9[32064]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:16:34 compute-0 sudo[32214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpsgneonftusftvoohzvqoxxfevjwyom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552193.825836-40-244020343994978/AnsiballZ_command.py'
Jan 27 22:16:34 compute-0 sudo[32214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:16:34 compute-0 python3.9[32216]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:16:34 compute-0 sudo[32214]: pam_unix(sudo:session): session closed for user root
Jan 27 22:16:35 compute-0 sudo[32367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcjdjygpqniydiqehvsiitgsggopdjbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552194.8084033-52-100882787221239/AnsiballZ_stat.py'
Jan 27 22:16:35 compute-0 sudo[32367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:16:35 compute-0 python3.9[32369]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:16:35 compute-0 sudo[32367]: pam_unix(sudo:session): session closed for user root
Jan 27 22:16:36 compute-0 sudo[32519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwtrbnezarrldklbdosufrxuypmtgpiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552195.695745-60-253625483851636/AnsiballZ_file.py'
Jan 27 22:16:36 compute-0 sudo[32519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:16:36 compute-0 python3.9[32521]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:16:36 compute-0 sudo[32519]: pam_unix(sudo:session): session closed for user root
Jan 27 22:16:36 compute-0 sudo[32671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xugetoarkbqjcvqhjfegxrbchvktmpcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552196.5400007-68-161004931919455/AnsiballZ_stat.py'
Jan 27 22:16:36 compute-0 sudo[32671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:16:37 compute-0 python3.9[32673]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:16:37 compute-0 sudo[32671]: pam_unix(sudo:session): session closed for user root
Jan 27 22:16:37 compute-0 sudo[32794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkkwkkpmzruelkjuttiikdhztyankrhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552196.5400007-68-161004931919455/AnsiballZ_copy.py'
Jan 27 22:16:37 compute-0 sudo[32794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:16:37 compute-0 python3.9[32796]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769552196.5400007-68-161004931919455/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:16:37 compute-0 sudo[32794]: pam_unix(sudo:session): session closed for user root
Jan 27 22:16:38 compute-0 sudo[32946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpfljpxjmmwinabzudqtjcfkcudpoasf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552197.9905179-83-151384901532986/AnsiballZ_setup.py'
Jan 27 22:16:38 compute-0 sudo[32946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:16:38 compute-0 python3.9[32948]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:16:38 compute-0 sudo[32946]: pam_unix(sudo:session): session closed for user root
Jan 27 22:16:39 compute-0 sudo[33102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-higbukwdnibnfnpubnbfzhokozlzikgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552199.0042517-91-165415997449892/AnsiballZ_file.py'
Jan 27 22:16:39 compute-0 sudo[33102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:16:39 compute-0 python3.9[33104]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:16:39 compute-0 sudo[33102]: pam_unix(sudo:session): session closed for user root
Jan 27 22:16:40 compute-0 sudo[33254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcdornsmlttjbnnzvdfkutvtmnyfpvil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552199.7089612-100-128203876621516/AnsiballZ_file.py'
Jan 27 22:16:40 compute-0 sudo[33254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:16:40 compute-0 python3.9[33256]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:16:40 compute-0 sudo[33254]: pam_unix(sudo:session): session closed for user root
Jan 27 22:16:41 compute-0 python3.9[33406]: ansible-ansible.builtin.service_facts Invoked
Jan 27 22:16:47 compute-0 python3.9[33659]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:16:47 compute-0 python3.9[33809]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:16:49 compute-0 python3.9[33963]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:16:49 compute-0 sudo[34119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkaommrheraxjarvfnyfkjwvkbdjjmal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552209.6264246-148-36468144835492/AnsiballZ_setup.py'
Jan 27 22:16:49 compute-0 sudo[34119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:16:50 compute-0 python3.9[34121]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 22:16:50 compute-0 sudo[34119]: pam_unix(sudo:session): session closed for user root
Jan 27 22:16:51 compute-0 sudo[34203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frrqzzbfhwvsqlzsoqxhmdposnriobno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552209.6264246-148-36468144835492/AnsiballZ_dnf.py'
Jan 27 22:16:51 compute-0 sudo[34203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:16:51 compute-0 python3.9[34205]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 22:17:47 compute-0 sshd-session[34494]: Received disconnect from 91.224.92.54 port 63310:11:  [preauth]
Jan 27 22:17:47 compute-0 sshd-session[34494]: Disconnected from authenticating user root 91.224.92.54 port 63310 [preauth]
Jan 27 22:17:58 compute-0 systemd[1]: Reloading.
Jan 27 22:17:58 compute-0 systemd-rc-local-generator[34551]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:17:58 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 27 22:17:59 compute-0 systemd[1]: Reloading.
Jan 27 22:17:59 compute-0 systemd-rc-local-generator[34594]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:17:59 compute-0 systemd[1]: Starting dnf makecache...
Jan 27 22:17:59 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 27 22:17:59 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 27 22:17:59 compute-0 systemd[1]: Reloading.
Jan 27 22:17:59 compute-0 systemd-rc-local-generator[34632]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:17:59 compute-0 dnf[34602]: Failed determining last makecache time.
Jan 27 22:17:59 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 27 22:17:59 compute-0 dnf[34602]: delorean-openstack-barbican-42b4c41831408a8e323 128 kB/s | 3.0 kB     00:00
Jan 27 22:17:59 compute-0 dnf[34602]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 149 kB/s | 3.0 kB     00:00
Jan 27 22:17:59 compute-0 dnf[34602]: delorean-openstack-cinder-1c00d6490d88e436f26ef 143 kB/s | 3.0 kB     00:00
Jan 27 22:17:59 compute-0 dnf[34602]: delorean-python-stevedore-c4acc5639fd2329372142 152 kB/s | 3.0 kB     00:00
Jan 27 22:17:59 compute-0 dnf[34602]: delorean-python-cloudkitty-tests-tempest-2c80f8 152 kB/s | 3.0 kB     00:00
Jan 27 22:17:59 compute-0 dnf[34602]: delorean-os-refresh-config-9bfc52b5049be2d8de61 179 kB/s | 3.0 kB     00:00
Jan 27 22:17:59 compute-0 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Jan 27 22:17:59 compute-0 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Jan 27 22:17:59 compute-0 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Jan 27 22:17:59 compute-0 dnf[34602]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 169 kB/s | 3.0 kB     00:00
Jan 27 22:17:59 compute-0 dnf[34602]: delorean-python-designate-tests-tempest-347fdbc 154 kB/s | 3.0 kB     00:00
Jan 27 22:17:59 compute-0 dnf[34602]: delorean-openstack-glance-1fd12c29b339f30fe823e 130 kB/s | 3.0 kB     00:00
Jan 27 22:17:59 compute-0 dnf[34602]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 153 kB/s | 3.0 kB     00:00
Jan 27 22:17:59 compute-0 dnf[34602]: delorean-openstack-manila-3c01b7181572c95dac462 154 kB/s | 3.0 kB     00:00
Jan 27 22:17:59 compute-0 dnf[34602]: delorean-python-whitebox-neutron-tests-tempest- 150 kB/s | 3.0 kB     00:00
Jan 27 22:17:59 compute-0 dnf[34602]: delorean-openstack-octavia-ba397f07a7331190208c 145 kB/s | 3.0 kB     00:00
Jan 27 22:17:59 compute-0 dnf[34602]: delorean-openstack-watcher-c014f81a8647287f6dcc 168 kB/s | 3.0 kB     00:00
Jan 27 22:17:59 compute-0 dnf[34602]: delorean-ansible-config_template-5ccaa22121a7ff 158 kB/s | 3.0 kB     00:00
Jan 27 22:17:59 compute-0 dnf[34602]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 153 kB/s | 3.0 kB     00:00
Jan 27 22:17:59 compute-0 dnf[34602]: delorean-openstack-swift-dc98a8463506ac520c469a 165 kB/s | 3.0 kB     00:00
Jan 27 22:17:59 compute-0 dnf[34602]: delorean-python-tempestconf-8515371b7cceebd4282 147 kB/s | 3.0 kB     00:00
Jan 27 22:17:59 compute-0 dnf[34602]: delorean-openstack-heat-ui-013accbfd179753bc3f0 146 kB/s | 3.0 kB     00:00
Jan 27 22:18:00 compute-0 dnf[34602]: CentOS Stream 9 - BaseOS                         30 kB/s | 6.7 kB     00:00
Jan 27 22:18:00 compute-0 dnf[34602]: CentOS Stream 9 - AppStream                      68 kB/s | 6.8 kB     00:00
Jan 27 22:18:00 compute-0 dnf[34602]: CentOS Stream 9 - CRB                            64 kB/s | 6.6 kB     00:00
Jan 27 22:18:00 compute-0 dnf[34602]: CentOS Stream 9 - Extras packages                75 kB/s | 7.3 kB     00:00
Jan 27 22:18:00 compute-0 dnf[34602]: dlrn-antelope-testing                           144 kB/s | 3.0 kB     00:00
Jan 27 22:18:00 compute-0 dnf[34602]: dlrn-antelope-build-deps                        151 kB/s | 3.0 kB     00:00
Jan 27 22:18:00 compute-0 dnf[34602]: centos9-rabbitmq                                107 kB/s | 3.0 kB     00:00
Jan 27 22:18:00 compute-0 dnf[34602]: centos9-storage                                  81 kB/s | 3.0 kB     00:00
Jan 27 22:18:00 compute-0 dnf[34602]: centos9-opstools                                118 kB/s | 3.0 kB     00:00
Jan 27 22:18:00 compute-0 dnf[34602]: NFV SIG OpenvSwitch                             116 kB/s | 3.0 kB     00:00
Jan 27 22:18:00 compute-0 dnf[34602]: repo-setup-centos-appstream                      95 kB/s | 4.4 kB     00:00
Jan 27 22:18:01 compute-0 dnf[34602]: repo-setup-centos-baseos                        124 kB/s | 3.9 kB     00:00
Jan 27 22:18:01 compute-0 dnf[34602]: repo-setup-centos-highavailability              156 kB/s | 3.9 kB     00:00
Jan 27 22:18:01 compute-0 dnf[34602]: repo-setup-centos-powertools                    154 kB/s | 4.3 kB     00:00
Jan 27 22:18:01 compute-0 dnf[34602]: Extra Packages for Enterprise Linux 9 - x86_64  227 kB/s |  31 kB     00:00
Jan 27 22:18:01 compute-0 dnf[34602]: Metadata cache created.
Jan 27 22:18:01 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 27 22:18:01 compute-0 systemd[1]: Finished dnf makecache.
Jan 27 22:18:01 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.735s CPU time.
Jan 27 22:18:58 compute-0 kernel: SELinux:  Converting 2725 SID table entries...
Jan 27 22:18:58 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 22:18:58 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 27 22:18:58 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 22:18:58 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 27 22:18:58 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 22:18:58 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 22:18:58 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 22:18:59 compute-0 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 27 22:18:59 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 22:18:59 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 27 22:18:59 compute-0 systemd[1]: Reloading.
Jan 27 22:18:59 compute-0 systemd-rc-local-generator[34991]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:18:59 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 22:18:59 compute-0 sudo[34203]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:00 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 22:19:00 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 22:19:00 compute-0 systemd[1]: run-re63e83742b9442cfa1fed11c52f8c02b.service: Deactivated successfully.
Jan 27 22:19:00 compute-0 sudo[35903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwitrhzgzordxjxzzgsmnkkspceruodx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552340.1571412-160-2522985645721/AnsiballZ_command.py'
Jan 27 22:19:00 compute-0 sudo[35903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:00 compute-0 python3.9[35905]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:19:01 compute-0 sudo[35903]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:02 compute-0 sudo[36184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rysvwiqputqamqvdcbldnlozqvnldhok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552341.7287495-168-177986836753086/AnsiballZ_selinux.py'
Jan 27 22:19:02 compute-0 sudo[36184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:02 compute-0 python3.9[36186]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 27 22:19:02 compute-0 sudo[36184]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:03 compute-0 sudo[36336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqpaohnifmaoauqewtsnvgpicaydkjsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552342.938349-179-3806965482515/AnsiballZ_command.py'
Jan 27 22:19:03 compute-0 sudo[36336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:03 compute-0 python3.9[36338]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 27 22:19:04 compute-0 sudo[36336]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:04 compute-0 sudo[36489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzermxtfaqkfwgjpacmwbbrdxhktrgvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552344.4978623-187-153659882642250/AnsiballZ_file.py'
Jan 27 22:19:04 compute-0 sudo[36489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:05 compute-0 python3.9[36491]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:19:05 compute-0 sudo[36489]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:06 compute-0 sudo[36642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncwonijghtehercuvztwjmlciotnadhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552345.654551-195-271261825335374/AnsiballZ_mount.py'
Jan 27 22:19:06 compute-0 sudo[36642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:06 compute-0 python3.9[36644]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 27 22:19:06 compute-0 sudo[36642]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:07 compute-0 sudo[36794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkircmyldxwygvzykjrpbcwwzfftaltx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552347.208315-223-257446017107410/AnsiballZ_file.py'
Jan 27 22:19:07 compute-0 sudo[36794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:07 compute-0 python3.9[36796]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:19:07 compute-0 sudo[36794]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:08 compute-0 sudo[36946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pucqayjfxiwfqjexndeshnmuureiksjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552347.8778389-231-126725038184463/AnsiballZ_stat.py'
Jan 27 22:19:08 compute-0 sudo[36946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:08 compute-0 python3.9[36948]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:19:08 compute-0 sudo[36946]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:08 compute-0 sudo[37069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yejyvupykoizdzqyjwdedensamnzmpih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552347.8778389-231-126725038184463/AnsiballZ_copy.py'
Jan 27 22:19:08 compute-0 sudo[37069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:08 compute-0 python3.9[37071]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552347.8778389-231-126725038184463/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=599e206bd571b4f5a31985a590e147a0494141e3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:19:08 compute-0 sudo[37069]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:09 compute-0 sudo[37221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asrhdrlylbbsyayzkywhdaqidnuqdpew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552349.3968165-255-199667477846410/AnsiballZ_stat.py'
Jan 27 22:19:09 compute-0 sudo[37221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:12 compute-0 python3.9[37223]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:19:12 compute-0 sudo[37221]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:12 compute-0 sudo[37373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qknhnklupkhijptpzocttqnyozygajdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552352.6059523-263-168482812385265/AnsiballZ_command.py'
Jan 27 22:19:12 compute-0 sudo[37373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:13 compute-0 python3.9[37375]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:19:13 compute-0 sudo[37373]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:13 compute-0 sudo[37526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-keynqluxbgvbivqdwevcgpohbyljikdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552353.2796838-271-134170700335254/AnsiballZ_file.py'
Jan 27 22:19:13 compute-0 sudo[37526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:13 compute-0 python3.9[37528]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:19:13 compute-0 sudo[37526]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:14 compute-0 sudo[37678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpyxldnfpqmibduklvwgpyuvnsdnhjmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552354.2367992-282-58418494568115/AnsiballZ_getent.py'
Jan 27 22:19:14 compute-0 sudo[37678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:14 compute-0 python3.9[37680]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 27 22:19:14 compute-0 sudo[37678]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:14 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 22:19:14 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 22:19:15 compute-0 sudo[37832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stzqsfuwwpjkzudusvxknttkutddrehi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552355.1482103-290-214703193627558/AnsiballZ_group.py'
Jan 27 22:19:15 compute-0 sudo[37832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:15 compute-0 python3.9[37834]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 27 22:19:15 compute-0 groupadd[37835]: group added to /etc/group: name=qemu, GID=107
Jan 27 22:19:15 compute-0 groupadd[37835]: group added to /etc/gshadow: name=qemu
Jan 27 22:19:15 compute-0 groupadd[37835]: new group: name=qemu, GID=107
Jan 27 22:19:15 compute-0 sudo[37832]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:16 compute-0 sudo[37990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwbpnlkeyrixvzdrvjhdgyzthxgglekp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552356.1653197-298-28474104642946/AnsiballZ_user.py'
Jan 27 22:19:16 compute-0 sudo[37990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:16 compute-0 python3.9[37992]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 27 22:19:16 compute-0 useradd[37994]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Jan 27 22:19:17 compute-0 sudo[37990]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:17 compute-0 sudo[38150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eovmbeoweoydsfewijsywdjijnzpgofu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552357.1744664-306-153667070289622/AnsiballZ_getent.py'
Jan 27 22:19:17 compute-0 sudo[38150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:17 compute-0 python3.9[38152]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 27 22:19:17 compute-0 sudo[38150]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:18 compute-0 sudo[38303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtjeuclltsoagbkszamhgkoitrofqsti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552357.7518063-314-126475851153132/AnsiballZ_group.py'
Jan 27 22:19:18 compute-0 sudo[38303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:18 compute-0 python3.9[38305]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 27 22:19:18 compute-0 groupadd[38306]: group added to /etc/group: name=hugetlbfs, GID=42477
Jan 27 22:19:18 compute-0 groupadd[38306]: group added to /etc/gshadow: name=hugetlbfs
Jan 27 22:19:18 compute-0 groupadd[38306]: new group: name=hugetlbfs, GID=42477
Jan 27 22:19:18 compute-0 sudo[38303]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:18 compute-0 sudo[38461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mapwggkbkqgkegrcyagkaxggvmwiakev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552358.4950867-323-934384394879/AnsiballZ_file.py'
Jan 27 22:19:18 compute-0 sudo[38461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:19 compute-0 python3.9[38463]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 27 22:19:19 compute-0 sudo[38461]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:19 compute-0 sudo[38613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxhxnkdbnawlrjbjqkwgwheknesjoawr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552359.309816-334-64815734543364/AnsiballZ_dnf.py'
Jan 27 22:19:19 compute-0 sudo[38613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:19 compute-0 python3.9[38615]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 22:19:21 compute-0 sudo[38613]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:22 compute-0 sudo[38767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gllzejfgcddwnmsvmoeuuqsixhjhkeqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552361.7502794-342-19738505827481/AnsiballZ_file.py'
Jan 27 22:19:22 compute-0 sudo[38767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:22 compute-0 python3.9[38769]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:19:22 compute-0 sudo[38767]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:22 compute-0 sudo[38919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdrgpdflbduziugghajywvpfyxyjoupi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552362.4587195-350-243469477139621/AnsiballZ_stat.py'
Jan 27 22:19:22 compute-0 sudo[38919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:22 compute-0 python3.9[38921]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:19:22 compute-0 sudo[38919]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:23 compute-0 sudo[39042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofzyudmvherlnsmdxlwuqwplphvodzha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552362.4587195-350-243469477139621/AnsiballZ_copy.py'
Jan 27 22:19:23 compute-0 sudo[39042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:23 compute-0 python3.9[39044]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769552362.4587195-350-243469477139621/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:19:23 compute-0 sudo[39042]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:24 compute-0 sudo[39194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvuaiazxdulultfyeuyzschkhawporzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552363.6994011-365-219111094044509/AnsiballZ_systemd.py'
Jan 27 22:19:24 compute-0 sudo[39194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:24 compute-0 python3.9[39196]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 22:19:24 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 27 22:19:24 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 27 22:19:24 compute-0 kernel: Bridge firewalling registered
Jan 27 22:19:24 compute-0 systemd-modules-load[39200]: Inserted module 'br_netfilter'
Jan 27 22:19:24 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 27 22:19:24 compute-0 sudo[39194]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:25 compute-0 sudo[39353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppyvpyncluojloihvjnpvjvwylyqemqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552365.109471-373-257796604025962/AnsiballZ_stat.py'
Jan 27 22:19:25 compute-0 sudo[39353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:25 compute-0 python3.9[39355]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:19:25 compute-0 sudo[39353]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:25 compute-0 sudo[39476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svtvzojvfiebbfmcdnngxxpehuzktjem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552365.109471-373-257796604025962/AnsiballZ_copy.py'
Jan 27 22:19:25 compute-0 sudo[39476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:26 compute-0 python3.9[39478]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769552365.109471-373-257796604025962/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:19:26 compute-0 sudo[39476]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:26 compute-0 sudo[39628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbxkoezeczsjofduhjtiitxesijdxuku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552366.3956904-391-229558514634250/AnsiballZ_dnf.py'
Jan 27 22:19:26 compute-0 sudo[39628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:26 compute-0 python3.9[39630]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 22:19:31 compute-0 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Jan 27 22:19:31 compute-0 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Jan 27 22:19:31 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 22:19:31 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 27 22:19:31 compute-0 systemd[1]: Reloading.
Jan 27 22:19:31 compute-0 systemd-rc-local-generator[39699]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:19:32 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 22:19:32 compute-0 sudo[39628]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:33 compute-0 python3.9[40907]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:19:34 compute-0 python3.9[41708]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 27 22:19:34 compute-0 python3.9[42470]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:19:35 compute-0 sudo[43255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjlstiqzloeihmzpvbsyhqspvzbbchbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552375.122213-430-221476187117662/AnsiballZ_command.py'
Jan 27 22:19:35 compute-0 sudo[43255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:35 compute-0 python3.9[43277]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:19:35 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 27 22:19:36 compute-0 systemd[1]: Starting Authorization Manager...
Jan 27 22:19:36 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 27 22:19:36 compute-0 polkitd[44023]: Started polkitd version 0.117
Jan 27 22:19:36 compute-0 polkitd[44023]: Loading rules from directory /etc/polkit-1/rules.d
Jan 27 22:19:36 compute-0 polkitd[44023]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 27 22:19:36 compute-0 polkitd[44023]: Finished loading, compiling and executing 2 rules
Jan 27 22:19:36 compute-0 systemd[1]: Started Authorization Manager.
Jan 27 22:19:36 compute-0 polkitd[44023]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 27 22:19:36 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 22:19:36 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 22:19:36 compute-0 systemd[1]: man-db-cache-update.service: Consumed 5.436s CPU time.
Jan 27 22:19:36 compute-0 systemd[1]: run-r2ce024ca18f64ca5a5f1c26b74e6c6d9.service: Deactivated successfully.
Jan 27 22:19:36 compute-0 sudo[43255]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:36 compute-0 sudo[44192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmacjlnlonrsvgtbgfsbolzhymhtowyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552376.4304118-439-149827906410833/AnsiballZ_systemd.py'
Jan 27 22:19:36 compute-0 sudo[44192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:37 compute-0 python3.9[44194]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:19:37 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 27 22:19:37 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 27 22:19:37 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 27 22:19:37 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 27 22:19:37 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 27 22:19:37 compute-0 sudo[44192]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:38 compute-0 python3.9[44356]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 27 22:19:40 compute-0 sudo[44506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abeqbthskjwugqizfcutgwnrypenpgum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552379.675031-496-60741728453942/AnsiballZ_systemd.py'
Jan 27 22:19:40 compute-0 sudo[44506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:40 compute-0 python3.9[44508]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:19:40 compute-0 systemd[1]: Reloading.
Jan 27 22:19:40 compute-0 systemd-rc-local-generator[44538]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:19:40 compute-0 sudo[44506]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:41 compute-0 sudo[44695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzyrnacggiabwolmfpldxxvcasqogjus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552380.801056-496-231283025613822/AnsiballZ_systemd.py'
Jan 27 22:19:41 compute-0 sudo[44695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:41 compute-0 python3.9[44697]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:19:41 compute-0 systemd[1]: Reloading.
Jan 27 22:19:41 compute-0 systemd-rc-local-generator[44730]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:19:41 compute-0 sudo[44695]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:42 compute-0 sudo[44885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scjdjrxwzrqlfdmjgalzszpfrcjdmphl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552382.005743-512-114839879457138/AnsiballZ_command.py'
Jan 27 22:19:42 compute-0 sudo[44885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:42 compute-0 python3.9[44887]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:19:42 compute-0 sudo[44885]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:43 compute-0 sudo[45038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxcpqsfjtsbkmhokgrzijsqinfjrvmfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552382.839376-520-8924674394717/AnsiballZ_command.py'
Jan 27 22:19:43 compute-0 sudo[45038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:43 compute-0 python3.9[45040]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:19:43 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 27 22:19:43 compute-0 sudo[45038]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:43 compute-0 sudo[45191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-febxidyqfngeizlajnbfbuwnstxouugi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552383.4189453-528-149644670760724/AnsiballZ_command.py'
Jan 27 22:19:43 compute-0 sudo[45191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:43 compute-0 python3.9[45193]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:19:45 compute-0 sudo[45191]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:45 compute-0 sudo[45353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dstqxcjnapdcttxzctsrfbjniqqfqmzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552385.3128912-536-250086361094111/AnsiballZ_command.py'
Jan 27 22:19:45 compute-0 sudo[45353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:45 compute-0 python3.9[45355]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:19:45 compute-0 sudo[45353]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:46 compute-0 sudo[45506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-popktvtdhcvhuddptuorbctvojayexwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552385.9094143-544-103649984394795/AnsiballZ_systemd.py'
Jan 27 22:19:46 compute-0 sudo[45506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:46 compute-0 python3.9[45508]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 22:19:46 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 27 22:19:46 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Jan 27 22:19:46 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Jan 27 22:19:46 compute-0 systemd[1]: Starting Apply Kernel Variables...
Jan 27 22:19:46 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 27 22:19:46 compute-0 systemd[1]: Finished Apply Kernel Variables.
Jan 27 22:19:46 compute-0 sudo[45506]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:46 compute-0 sshd-session[31740]: Connection closed by 192.168.122.30 port 43340
Jan 27 22:19:46 compute-0 sshd-session[31737]: pam_unix(sshd:session): session closed for user zuul
Jan 27 22:19:46 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Jan 27 22:19:46 compute-0 systemd[1]: session-9.scope: Consumed 2min 16.228s CPU time.
Jan 27 22:19:46 compute-0 systemd-logind[789]: Session 9 logged out. Waiting for processes to exit.
Jan 27 22:19:46 compute-0 systemd-logind[789]: Removed session 9.
Jan 27 22:19:52 compute-0 sshd-session[45538]: Accepted publickey for zuul from 192.168.122.30 port 55866 ssh2: ECDSA SHA256:f2siSFgqhRl+V43NMPJ82N3mZUylXFtu0KAbYfQTK7A
Jan 27 22:19:52 compute-0 systemd-logind[789]: New session 10 of user zuul.
Jan 27 22:19:52 compute-0 systemd[1]: Started Session 10 of User zuul.
Jan 27 22:19:52 compute-0 sshd-session[45538]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 22:19:53 compute-0 python3.9[45691]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:19:54 compute-0 python3.9[45845]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:19:55 compute-0 sudo[45999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amazuustzkjroyepmmvmfotpwbuuwcyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552395.1884346-45-135794490087573/AnsiballZ_command.py'
Jan 27 22:19:55 compute-0 sudo[45999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:55 compute-0 python3.9[46001]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:19:55 compute-0 sudo[45999]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:56 compute-0 python3.9[46152]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:19:57 compute-0 sudo[46306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kccnguxdclnjcaxpkhqtpdbdxegxnpad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552397.107004-65-13334638401665/AnsiballZ_setup.py'
Jan 27 22:19:57 compute-0 sudo[46306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:57 compute-0 python3.9[46308]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 22:19:57 compute-0 sudo[46306]: pam_unix(sudo:session): session closed for user root
Jan 27 22:19:58 compute-0 sudo[46390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aigsxkziiujtkfovcyflqkbaluvetrci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552397.107004-65-13334638401665/AnsiballZ_dnf.py'
Jan 27 22:19:58 compute-0 sudo[46390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:19:58 compute-0 python3.9[46392]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 22:19:59 compute-0 sudo[46390]: pam_unix(sudo:session): session closed for user root
Jan 27 22:20:00 compute-0 sudo[46543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzftwurzscildkcjoepqtbotxrrisimw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552399.8671408-77-37412005605207/AnsiballZ_setup.py'
Jan 27 22:20:00 compute-0 sudo[46543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:20:00 compute-0 python3.9[46545]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 22:20:00 compute-0 sudo[46543]: pam_unix(sudo:session): session closed for user root
Jan 27 22:20:01 compute-0 sudo[46714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyxqxdqzhshyxvyugstualatukukusuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552400.7009385-88-60896722772562/AnsiballZ_file.py'
Jan 27 22:20:01 compute-0 sudo[46714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:20:01 compute-0 python3.9[46716]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:20:01 compute-0 sudo[46714]: pam_unix(sudo:session): session closed for user root
Jan 27 22:20:01 compute-0 sudo[46866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcorznwwmfydjcrazjtelrzlpoigawtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552401.4552667-96-197787125694794/AnsiballZ_command.py'
Jan 27 22:20:01 compute-0 sudo[46866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:20:01 compute-0 python3.9[46868]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:20:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat981194344-merged.mount: Deactivated successfully.
Jan 27 22:20:02 compute-0 podman[46869]: 2026-01-27 22:20:02.031046216 +0000 UTC m=+0.064733952 system refresh
Jan 27 22:20:02 compute-0 sudo[46866]: pam_unix(sudo:session): session closed for user root
Jan 27 22:20:02 compute-0 sudo[47029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmfeaxpnzzqbifqgogphpgvgofyldiut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552402.2245023-104-105020337308978/AnsiballZ_stat.py'
Jan 27 22:20:02 compute-0 sudo[47029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:20:02 compute-0 python3.9[47031]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:20:02 compute-0 sudo[47029]: pam_unix(sudo:session): session closed for user root
Jan 27 22:20:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 22:20:03 compute-0 sudo[47152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmyxyqxihowysyjgsolothpdlfjdmoak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552402.2245023-104-105020337308978/AnsiballZ_copy.py'
Jan 27 22:20:03 compute-0 sudo[47152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:20:03 compute-0 python3.9[47154]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552402.2245023-104-105020337308978/.source.json follow=False _original_basename=podman_network_config.j2 checksum=7d355343d7d3c26f55415f6de42898d889b9a466 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:20:03 compute-0 sudo[47152]: pam_unix(sudo:session): session closed for user root
Jan 27 22:20:03 compute-0 sudo[47304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-likqvbuftagwssjzngloltxgnjdhdeem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552403.6857963-119-213779726930759/AnsiballZ_stat.py'
Jan 27 22:20:03 compute-0 sudo[47304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:20:04 compute-0 python3.9[47306]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:20:04 compute-0 sudo[47304]: pam_unix(sudo:session): session closed for user root
Jan 27 22:20:04 compute-0 sudo[47427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvhekktsqzxeqfnnhoyhojbnenesqngf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552403.6857963-119-213779726930759/AnsiballZ_copy.py'
Jan 27 22:20:04 compute-0 sudo[47427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:20:04 compute-0 python3.9[47429]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769552403.6857963-119-213779726930759/.source.conf follow=False _original_basename=registries.conf.j2 checksum=1f3eae670902d81b6898b401f0bbba899d0240bf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:20:04 compute-0 sudo[47427]: pam_unix(sudo:session): session closed for user root
Jan 27 22:20:05 compute-0 sudo[47579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eulhiyeogqbgxugqpsowynwliobsvvfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552404.8681836-135-223210183242268/AnsiballZ_ini_file.py'
Jan 27 22:20:05 compute-0 sudo[47579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:20:05 compute-0 python3.9[47581]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:20:05 compute-0 sudo[47579]: pam_unix(sudo:session): session closed for user root
Jan 27 22:20:05 compute-0 sudo[47731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojomemnbhbkaravmslmlcgmcssotujjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552405.605141-135-237775815169069/AnsiballZ_ini_file.py'
Jan 27 22:20:05 compute-0 sudo[47731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:20:06 compute-0 python3.9[47733]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:20:06 compute-0 sudo[47731]: pam_unix(sudo:session): session closed for user root
Jan 27 22:20:06 compute-0 sudo[47883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikicruqrygopdjifunxptyocqozwwyop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552406.174573-135-64545948197335/AnsiballZ_ini_file.py'
Jan 27 22:20:06 compute-0 sudo[47883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:20:06 compute-0 python3.9[47885]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:20:06 compute-0 sudo[47883]: pam_unix(sudo:session): session closed for user root
Jan 27 22:20:07 compute-0 sudo[48035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bodoclmsqgvigfqvjodcxtsyfiqwbxpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552406.7562954-135-24629977604267/AnsiballZ_ini_file.py'
Jan 27 22:20:07 compute-0 sudo[48035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:20:07 compute-0 python3.9[48037]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:20:07 compute-0 sudo[48035]: pam_unix(sudo:session): session closed for user root
Jan 27 22:20:08 compute-0 python3.9[48187]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:20:08 compute-0 sudo[48339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvivpjhqyurfulggdpgzfbrthhjvuusf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552408.3030653-175-237843474036959/AnsiballZ_dnf.py'
Jan 27 22:20:08 compute-0 sudo[48339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:20:08 compute-0 python3.9[48341]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 22:20:10 compute-0 sudo[48339]: pam_unix(sudo:session): session closed for user root
Jan 27 22:20:10 compute-0 sudo[48492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oewvvqbvushmoubjgaioarighrodhdvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552410.1676557-183-45204273100356/AnsiballZ_dnf.py'
Jan 27 22:20:10 compute-0 sudo[48492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:20:10 compute-0 python3.9[48494]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 22:20:13 compute-0 sudo[48492]: pam_unix(sudo:session): session closed for user root
Jan 27 22:20:14 compute-0 sudo[48652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eunrdrscjhxfpjnzkttjzxelhfeiklxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552413.8359356-193-10124901855318/AnsiballZ_dnf.py'
Jan 27 22:20:14 compute-0 sudo[48652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:20:14 compute-0 python3.9[48654]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 22:20:15 compute-0 sudo[48652]: pam_unix(sudo:session): session closed for user root
Jan 27 22:20:16 compute-0 sudo[48805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pctcmtrdimpkrelwkgvvpmxnewapcfwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552415.7573519-202-18254293146265/AnsiballZ_dnf.py'
Jan 27 22:20:16 compute-0 sudo[48805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:20:16 compute-0 python3.9[48807]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 22:20:17 compute-0 sudo[48805]: pam_unix(sudo:session): session closed for user root
Jan 27 22:20:18 compute-0 sudo[48958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iusvemlnedsacmmurbrfdaanzmfpcwia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552417.8828862-213-14937832287215/AnsiballZ_dnf.py'
Jan 27 22:20:18 compute-0 sudo[48958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:20:18 compute-0 python3.9[48960]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['NetworkManager-ovs'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 22:20:19 compute-0 sudo[48958]: pam_unix(sudo:session): session closed for user root
Jan 27 22:20:20 compute-0 sudo[49114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikfcckjejjvdaoxkjnxsrhfdqzdefuyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552419.9041545-221-206037187708228/AnsiballZ_dnf.py'
Jan 27 22:20:20 compute-0 sudo[49114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:20:20 compute-0 python3.9[49116]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 22:20:22 compute-0 sudo[49114]: pam_unix(sudo:session): session closed for user root
Jan 27 22:20:23 compute-0 sudo[49282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptwinhgandeehslogockhthakyxlzbqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552422.8570347-230-135493230566052/AnsiballZ_dnf.py'
Jan 27 22:20:23 compute-0 sudo[49282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:20:23 compute-0 python3.9[49284]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 22:20:24 compute-0 sudo[49282]: pam_unix(sudo:session): session closed for user root
Jan 27 22:20:25 compute-0 sudo[49435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flsncvpegnndojqrsexvqzkmgxtlywdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552424.855418-239-187102017690308/AnsiballZ_dnf.py'
Jan 27 22:20:25 compute-0 sudo[49435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:20:25 compute-0 python3.9[49437]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 22:20:40 compute-0 sudo[49435]: pam_unix(sudo:session): session closed for user root
Jan 27 22:20:40 compute-0 sudo[49778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mghhaotvhnrghdlktflnyjfkurnvjcyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552440.5415184-248-244035130873364/AnsiballZ_dnf.py'
Jan 27 22:20:40 compute-0 sudo[49778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:20:41 compute-0 python3.9[49780]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['iscsi-initiator-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 22:20:42 compute-0 sudo[49778]: pam_unix(sudo:session): session closed for user root
Jan 27 22:20:42 compute-0 sudo[49934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezfswjssypbiroaphbdbcdngesfowtfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552442.551861-258-20165477071301/AnsiballZ_dnf.py'
Jan 27 22:20:42 compute-0 sudo[49934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:20:43 compute-0 python3.9[49936]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['device-mapper-multipath'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 22:20:44 compute-0 sudo[49934]: pam_unix(sudo:session): session closed for user root
Jan 27 22:20:45 compute-0 sudo[50091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrunrlnyoagtxfycegzrooflewaqztgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552445.1382155-269-231271994023980/AnsiballZ_file.py'
Jan 27 22:20:45 compute-0 sudo[50091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:20:45 compute-0 python3.9[50093]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:20:45 compute-0 sudo[50091]: pam_unix(sudo:session): session closed for user root
Jan 27 22:20:45 compute-0 sudo[50266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmdgthnshcuinycpreqbmtshorgyfhkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552445.7215118-277-33572516471703/AnsiballZ_stat.py'
Jan 27 22:20:46 compute-0 sudo[50266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:20:46 compute-0 python3.9[50268]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:20:46 compute-0 sudo[50266]: pam_unix(sudo:session): session closed for user root
Jan 27 22:20:46 compute-0 sudo[50389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpeawrwgzehzxlfxgazopllholagaeaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552445.7215118-277-33572516471703/AnsiballZ_copy.py'
Jan 27 22:20:46 compute-0 sudo[50389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:20:46 compute-0 python3.9[50391]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769552445.7215118-277-33572516471703/.source.json _original_basename=.ju1odybj follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:20:46 compute-0 sudo[50389]: pam_unix(sudo:session): session closed for user root
Jan 27 22:20:47 compute-0 sudo[50541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnzdhkbhxcbuxgcstnahyxecozkbvhqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552447.027537-295-33803553750030/AnsiballZ_podman_image.py'
Jan 27 22:20:47 compute-0 sudo[50541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:20:47 compute-0 python3.9[50543]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 27 22:20:47 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 22:20:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1362848574-lower\x2dmapped.mount: Deactivated successfully.
Jan 27 22:20:53 compute-0 podman[50556]: 2026-01-27 22:20:53.192780032 +0000 UTC m=+5.414998443 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 27 22:20:53 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 22:20:53 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 22:20:53 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 22:20:53 compute-0 sudo[50541]: pam_unix(sudo:session): session closed for user root
Jan 27 22:20:53 compute-0 sudo[50852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybnwsioczlabzpdkksgukdyuyqcskoix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552453.6850414-306-195566404390257/AnsiballZ_podman_image.py'
Jan 27 22:20:53 compute-0 sudo[50852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:20:54 compute-0 python3.9[50854]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 27 22:21:02 compute-0 podman[50866]: 2026-01-27 22:21:02.64421277 +0000 UTC m=+8.461285115 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 22:21:02 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 22:21:02 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 22:21:02 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 22:21:02 compute-0 sudo[50852]: pam_unix(sudo:session): session closed for user root
Jan 27 22:21:03 compute-0 sudo[51159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kesoxocqvruqxzcupoxrimnkmqkfyrxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552463.1046178-316-26719914811501/AnsiballZ_podman_image.py'
Jan 27 22:21:03 compute-0 sudo[51159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:21:03 compute-0 python3.9[51161]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 27 22:21:12 compute-0 podman[51174]: 2026-01-27 22:21:12.165155911 +0000 UTC m=+8.519801355 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 27 22:21:12 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 22:21:12 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 22:21:12 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 22:21:12 compute-0 sudo[51159]: pam_unix(sudo:session): session closed for user root
Jan 27 22:21:12 compute-0 sudo[51435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecsnrenksxrqyhehqtiudllaksmgibrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552472.616818-327-205222181923800/AnsiballZ_podman_image.py'
Jan 27 22:21:12 compute-0 sudo[51435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:21:13 compute-0 python3.9[51437]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 27 22:21:13 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 22:21:25 compute-0 podman[51449]: 2026-01-27 22:21:25.092096206 +0000 UTC m=+11.898811125 image pull 68a60f9093568ce7a1c5b4524fb1e8f03692d56fcec899fd30bbb31f7cc46992 quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Jan 27 22:21:25 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 22:21:25 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 22:21:25 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 22:21:25 compute-0 sudo[51435]: pam_unix(sudo:session): session closed for user root
Jan 27 22:21:25 compute-0 sudo[51768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujqeazwmjcbjripeaqlsgggcsymraxhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552485.4547446-327-17544267169168/AnsiballZ_podman_image.py'
Jan 27 22:21:25 compute-0 sudo[51768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:21:25 compute-0 python3.9[51770]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter:v1.5.0 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 27 22:21:27 compute-0 podman[51783]: 2026-01-27 22:21:27.447188811 +0000 UTC m=+1.451193107 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Jan 27 22:21:27 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 22:21:27 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 22:21:27 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 22:21:27 compute-0 sudo[51768]: pam_unix(sudo:session): session closed for user root
Jan 27 22:21:28 compute-0 sudo[52056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwkbpnfnaiiwkrqeecsllekgjedfnpvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552487.9597657-343-193919019425857/AnsiballZ_podman_image.py'
Jan 27 22:21:28 compute-0 sudo[52056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:21:28 compute-0 python3.9[52058]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 27 22:21:28 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 22:21:31 compute-0 podman[52071]: 2026-01-27 22:21:31.574977922 +0000 UTC m=+3.024690939 image pull a92f7bca491c0b0ce2687db04282e6791be0613adb46862c56450b0e1308679d quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Jan 27 22:21:31 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 22:21:31 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 22:21:31 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 22:21:31 compute-0 sudo[52056]: pam_unix(sudo:session): session closed for user root
Jan 27 22:21:32 compute-0 sudo[52328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iufupvsqiwvmlgfhdhnwnzzzktasrrwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552491.994813-343-240947689396598/AnsiballZ_podman_image.py'
Jan 27 22:21:32 compute-0 sudo[52328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:21:32 compute-0 python3.9[52330]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/sustainable_computing_io/kepler:release-0.7.12 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 27 22:21:39 compute-0 podman[52342]: 2026-01-27 22:21:39.454311437 +0000 UTC m=+6.887818204 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Jan 27 22:21:39 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 22:21:39 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 22:21:39 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 22:21:39 compute-0 sudo[52328]: pam_unix(sudo:session): session closed for user root
Jan 27 22:21:40 compute-0 sshd-session[45541]: Connection closed by 192.168.122.30 port 55866
Jan 27 22:21:40 compute-0 sshd-session[45538]: pam_unix(sshd:session): session closed for user zuul
Jan 27 22:21:40 compute-0 systemd-logind[789]: Session 10 logged out. Waiting for processes to exit.
Jan 27 22:21:40 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Jan 27 22:21:40 compute-0 systemd[1]: session-10.scope: Consumed 2min 22.357s CPU time.
Jan 27 22:21:40 compute-0 systemd-logind[789]: Removed session 10.
Jan 27 22:21:45 compute-0 sshd-session[52603]: Accepted publickey for zuul from 192.168.122.30 port 56768 ssh2: ECDSA SHA256:f2siSFgqhRl+V43NMPJ82N3mZUylXFtu0KAbYfQTK7A
Jan 27 22:21:45 compute-0 systemd-logind[789]: New session 11 of user zuul.
Jan 27 22:21:45 compute-0 systemd[1]: Started Session 11 of User zuul.
Jan 27 22:21:45 compute-0 sshd-session[52603]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 22:21:46 compute-0 python3.9[52756]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:21:47 compute-0 sudo[52910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xozqqobypzktnkcmvwsgsupgglvknzmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552507.4471161-31-227237654847325/AnsiballZ_getent.py'
Jan 27 22:21:47 compute-0 sudo[52910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:21:48 compute-0 python3.9[52912]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 27 22:21:48 compute-0 sudo[52910]: pam_unix(sudo:session): session closed for user root
Jan 27 22:21:48 compute-0 sudo[53063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inrzylaiuyhzteeffszuagjatwpdbrvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552508.3411732-39-141745490989592/AnsiballZ_group.py'
Jan 27 22:21:48 compute-0 sudo[53063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:21:48 compute-0 python3.9[53065]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 27 22:21:49 compute-0 groupadd[53066]: group added to /etc/group: name=openvswitch, GID=42476
Jan 27 22:21:49 compute-0 groupadd[53066]: group added to /etc/gshadow: name=openvswitch
Jan 27 22:21:49 compute-0 groupadd[53066]: new group: name=openvswitch, GID=42476
Jan 27 22:21:49 compute-0 sudo[53063]: pam_unix(sudo:session): session closed for user root
Jan 27 22:21:49 compute-0 sudo[53221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfmnthvpomyhxkdmbldolmnfckxpfzmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552509.266523-47-229408679695227/AnsiballZ_user.py'
Jan 27 22:21:49 compute-0 sudo[53221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:21:50 compute-0 python3.9[53223]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 27 22:21:50 compute-0 useradd[53225]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Jan 27 22:21:50 compute-0 useradd[53225]: add 'openvswitch' to group 'hugetlbfs'
Jan 27 22:21:50 compute-0 useradd[53225]: add 'openvswitch' to shadow group 'hugetlbfs'
Jan 27 22:21:50 compute-0 sudo[53221]: pam_unix(sudo:session): session closed for user root
Jan 27 22:21:50 compute-0 sudo[53381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exsprkmlkwnupgepjipmyguqmcrqdfdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552510.4149709-57-131512791578464/AnsiballZ_setup.py'
Jan 27 22:21:50 compute-0 sudo[53381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:21:50 compute-0 python3.9[53383]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 22:21:51 compute-0 sudo[53381]: pam_unix(sudo:session): session closed for user root
Jan 27 22:21:51 compute-0 sudo[53465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvclqokybslenbmdvprwaggpqugsaezr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552510.4149709-57-131512791578464/AnsiballZ_dnf.py'
Jan 27 22:21:51 compute-0 sudo[53465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:21:52 compute-0 python3.9[53467]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 22:21:53 compute-0 sudo[53465]: pam_unix(sudo:session): session closed for user root
Jan 27 22:21:54 compute-0 sudo[53626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pucwvetgyfejgzcnpijtbievrcvqdwda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552513.8011422-71-207273075848488/AnsiballZ_dnf.py'
Jan 27 22:21:54 compute-0 sudo[53626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:21:54 compute-0 python3.9[53628]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 22:22:05 compute-0 kernel: SELinux:  Converting 2738 SID table entries...
Jan 27 22:22:05 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 22:22:05 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 27 22:22:05 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 22:22:05 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 27 22:22:05 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 22:22:05 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 22:22:05 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 22:22:05 compute-0 groupadd[53651]: group added to /etc/group: name=unbound, GID=994
Jan 27 22:22:05 compute-0 groupadd[53651]: group added to /etc/gshadow: name=unbound
Jan 27 22:22:05 compute-0 groupadd[53651]: new group: name=unbound, GID=994
Jan 27 22:22:05 compute-0 useradd[53658]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Jan 27 22:22:05 compute-0 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 27 22:22:05 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 27 22:22:07 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 22:22:07 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 27 22:22:07 compute-0 systemd[1]: Reloading.
Jan 27 22:22:07 compute-0 systemd-rc-local-generator[54153]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:22:07 compute-0 systemd-sysv-generator[54156]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:22:07 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 22:22:07 compute-0 sudo[53626]: pam_unix(sudo:session): session closed for user root
Jan 27 22:22:07 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 22:22:07 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 22:22:07 compute-0 systemd[1]: run-r155cd673c51e454e8ac9b3c44fd98e66.service: Deactivated successfully.
Jan 27 22:22:08 compute-0 sudo[54724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqdbctluuupxxnwmpnjwngvpoyusxqwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552528.0563056-79-115817902468463/AnsiballZ_systemd.py'
Jan 27 22:22:08 compute-0 sudo[54724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:22:09 compute-0 python3.9[54726]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 22:22:09 compute-0 systemd[1]: Reloading.
Jan 27 22:22:09 compute-0 systemd-rc-local-generator[54756]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:22:09 compute-0 systemd-sysv-generator[54761]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:22:09 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Jan 27 22:22:09 compute-0 chown[54768]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 27 22:22:09 compute-0 ovs-ctl[54773]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 27 22:22:09 compute-0 ovs-ctl[54773]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 27 22:22:09 compute-0 ovs-ctl[54773]: Starting ovsdb-server [  OK  ]
Jan 27 22:22:09 compute-0 ovs-vsctl[54822]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 27 22:22:09 compute-0 ovs-vsctl[54842]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"e88f80e1-ee63-4bdc-95c3-ad473efb7428\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 27 22:22:09 compute-0 ovs-ctl[54773]: Configuring Open vSwitch system IDs [  OK  ]
Jan 27 22:22:09 compute-0 ovs-ctl[54773]: Enabling remote OVSDB managers [  OK  ]
Jan 27 22:22:09 compute-0 ovs-vsctl[54848]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 27 22:22:09 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Jan 27 22:22:09 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 27 22:22:09 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 27 22:22:09 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 27 22:22:09 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Jan 27 22:22:09 compute-0 ovs-ctl[54892]: Inserting openvswitch module [  OK  ]
Jan 27 22:22:09 compute-0 ovs-ctl[54861]: Starting ovs-vswitchd [  OK  ]
Jan 27 22:22:09 compute-0 ovs-vsctl[54909]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 27 22:22:09 compute-0 ovs-ctl[54861]: Enabling remote OVSDB managers [  OK  ]
Jan 27 22:22:09 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 27 22:22:09 compute-0 systemd[1]: Starting Open vSwitch...
Jan 27 22:22:09 compute-0 systemd[1]: Finished Open vSwitch.
Jan 27 22:22:09 compute-0 sudo[54724]: pam_unix(sudo:session): session closed for user root
Jan 27 22:22:10 compute-0 python3.9[55062]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:22:11 compute-0 sudo[55212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqldhdqpholrmicyqsomuclvndstndpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552531.0981948-97-253567208918775/AnsiballZ_sefcontext.py'
Jan 27 22:22:11 compute-0 sudo[55212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:22:11 compute-0 python3.9[55214]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 27 22:22:12 compute-0 kernel: SELinux:  Converting 2752 SID table entries...
Jan 27 22:22:12 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 22:22:12 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 27 22:22:12 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 22:22:12 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 27 22:22:12 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 22:22:12 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 22:22:12 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 22:22:13 compute-0 sudo[55212]: pam_unix(sudo:session): session closed for user root
Jan 27 22:22:14 compute-0 python3.9[55369]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:22:14 compute-0 sudo[55525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyrmhmnodmnxnaeqdqcbkfqvjoolnldn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552534.4895506-115-16206105125754/AnsiballZ_dnf.py'
Jan 27 22:22:14 compute-0 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 27 22:22:14 compute-0 sudo[55525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:22:15 compute-0 python3.9[55527]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 22:22:16 compute-0 sudo[55525]: pam_unix(sudo:session): session closed for user root
Jan 27 22:22:17 compute-0 sudo[55678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gofwnildhxsahtyifeuobbghivikrpip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552536.5169592-123-96401842392073/AnsiballZ_command.py'
Jan 27 22:22:17 compute-0 sudo[55678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:22:17 compute-0 python3.9[55680]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:22:18 compute-0 sudo[55678]: pam_unix(sudo:session): session closed for user root
Jan 27 22:22:18 compute-0 sudo[55965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaeyxsuoepwaqrcshezbtvcikyrvsyzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552538.2814953-131-216321321372101/AnsiballZ_file.py'
Jan 27 22:22:18 compute-0 sudo[55965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:22:18 compute-0 python3.9[55967]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 27 22:22:18 compute-0 sudo[55965]: pam_unix(sudo:session): session closed for user root
Jan 27 22:22:19 compute-0 python3.9[56117]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:22:20 compute-0 sudo[56269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veczdpemcaqamkqfijpwmkqgiipqkwbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552539.9101846-147-193341679837309/AnsiballZ_dnf.py'
Jan 27 22:22:20 compute-0 sudo[56269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:22:20 compute-0 python3.9[56271]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 22:22:22 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 22:22:22 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 27 22:22:22 compute-0 systemd[1]: Reloading.
Jan 27 22:22:22 compute-0 systemd-rc-local-generator[56305]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:22:22 compute-0 systemd-sysv-generator[56312]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:22:22 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 22:22:22 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 22:22:22 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 22:22:22 compute-0 systemd[1]: run-rf95a1d44aef8468a8da6007f100beca0.service: Deactivated successfully.
Jan 27 22:22:22 compute-0 sudo[56269]: pam_unix(sudo:session): session closed for user root
Jan 27 22:22:23 compute-0 sudo[56586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fflhwuonimngexllklunivvnopistsju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552542.9840024-155-223024210528998/AnsiballZ_systemd.py'
Jan 27 22:22:23 compute-0 sudo[56586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:22:23 compute-0 python3.9[56588]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 22:22:23 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 27 22:22:23 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Jan 27 22:22:23 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Jan 27 22:22:23 compute-0 systemd[1]: Stopping Network Manager...
Jan 27 22:22:23 compute-0 NetworkManager[7195]: <info>  [1769552543.5447] caught SIGTERM, shutting down normally.
Jan 27 22:22:23 compute-0 NetworkManager[7195]: <info>  [1769552543.5463] dhcp4 (eth0): canceled DHCP transaction
Jan 27 22:22:23 compute-0 NetworkManager[7195]: <info>  [1769552543.5463] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 27 22:22:23 compute-0 NetworkManager[7195]: <info>  [1769552543.5463] dhcp4 (eth0): state changed no lease
Jan 27 22:22:23 compute-0 NetworkManager[7195]: <info>  [1769552543.5466] manager: NetworkManager state is now CONNECTED_SITE
Jan 27 22:22:23 compute-0 NetworkManager[7195]: <info>  [1769552543.5533] exiting (success)
Jan 27 22:22:23 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 27 22:22:23 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 27 22:22:23 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 27 22:22:23 compute-0 systemd[1]: Stopped Network Manager.
Jan 27 22:22:23 compute-0 systemd[1]: NetworkManager.service: Consumed 19.436s CPU time, 4.1M memory peak, read 0B from disk, written 36.5K to disk.
Jan 27 22:22:23 compute-0 systemd[1]: Starting Network Manager...
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.6207] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:b296a529-9762-4dd6-b2a2-416e3ccb95c7)
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.6209] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.6275] manager[0x56356c720000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 27 22:22:23 compute-0 systemd[1]: Starting Hostname Service...
Jan 27 22:22:23 compute-0 systemd[1]: Started Hostname Service.
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.6986] hostname: hostname: using hostnamed
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.6990] hostname: static hostname changed from (none) to "compute-0"
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.6996] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7003] manager[0x56356c720000]: rfkill: Wi-Fi hardware radio set enabled
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7003] manager[0x56356c720000]: rfkill: WWAN hardware radio set enabled
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7037] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7052] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7054] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7055] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7055] manager: Networking is enabled by state file
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7059] settings: Loaded settings plugin: keyfile (internal)
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7066] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7107] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7120] dhcp: init: Using DHCP client 'internal'
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7124] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7132] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7140] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7152] device (lo): Activation: starting connection 'lo' (19c0906f-7bf5-4e0a-9fd9-d1d6accc761b)
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7164] device (eth0): carrier: link connected
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7171] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7181] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7182] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7193] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7205] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7215] device (eth1): carrier: link connected
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7222] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7230] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (60c54900-5f38-5c39-a049-82583fdf5947) (indicated)
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7231] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7240] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7252] device (eth1): Activation: starting connection 'ci-private-network' (60c54900-5f38-5c39-a049-82583fdf5947)
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7262] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 27 22:22:23 compute-0 systemd[1]: Started Network Manager.
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7275] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7281] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7285] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7290] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7296] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7301] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7305] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7311] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7324] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7331] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7348] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7377] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7392] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7396] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7405] device (lo): Activation: successful, device activated.
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7417] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7422] dhcp4 (eth0): state changed new lease, address=38.102.83.82
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7428] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7434] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 27 22:22:23 compute-0 systemd[1]: Starting Network Manager Wait Online...
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7438] device (eth1): Activation: successful, device activated.
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7456] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7537] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7552] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7554] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7558] manager: NetworkManager state is now CONNECTED_SITE
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7561] device (eth0): Activation: successful, device activated.
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7566] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 27 22:22:23 compute-0 NetworkManager[56600]: <info>  [1769552543.7569] manager: startup complete
Jan 27 22:22:23 compute-0 sudo[56586]: pam_unix(sudo:session): session closed for user root
Jan 27 22:22:23 compute-0 systemd[1]: Finished Network Manager Wait Online.
Jan 27 22:22:24 compute-0 sudo[56812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkldkarniyumqujeqdbpxwxfkprqwrtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552543.9186451-163-259276727727089/AnsiballZ_dnf.py'
Jan 27 22:22:24 compute-0 sudo[56812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:22:24 compute-0 python3.9[56814]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 22:22:28 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 22:22:28 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 27 22:22:28 compute-0 systemd[1]: Reloading.
Jan 27 22:22:28 compute-0 systemd-rc-local-generator[56868]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:22:28 compute-0 systemd-sysv-generator[56871]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:22:28 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 22:22:29 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 22:22:29 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 22:22:29 compute-0 systemd[1]: run-rf16f342f93444ce69ec8bce9681e4854.service: Deactivated successfully.
Jan 27 22:22:29 compute-0 sudo[56812]: pam_unix(sudo:session): session closed for user root
Jan 27 22:22:30 compute-0 sudo[57271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fumzunrmtachieabgeosgqnnhhmrpdeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552549.8060393-175-186426555408439/AnsiballZ_stat.py'
Jan 27 22:22:30 compute-0 sudo[57271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:22:30 compute-0 python3.9[57273]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:22:30 compute-0 sudo[57271]: pam_unix(sudo:session): session closed for user root
Jan 27 22:22:30 compute-0 sudo[57423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovvxdduefhtzvrjmijlkjtwurlgcvhio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552550.3715472-184-198567949826111/AnsiballZ_ini_file.py'
Jan 27 22:22:30 compute-0 sudo[57423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:22:30 compute-0 python3.9[57425]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:22:30 compute-0 sudo[57423]: pam_unix(sudo:session): session closed for user root
Jan 27 22:22:31 compute-0 sudo[57577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iigjqkylvbadehqjohaplufodsniedvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552551.161751-194-21291206202206/AnsiballZ_ini_file.py'
Jan 27 22:22:31 compute-0 sudo[57577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:22:31 compute-0 python3.9[57579]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:22:31 compute-0 sudo[57577]: pam_unix(sudo:session): session closed for user root
Jan 27 22:22:32 compute-0 sudo[57729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtsuemkkkxpvdtnfzykmjqhbdmnsxfys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552551.7136023-194-109860476155935/AnsiballZ_ini_file.py'
Jan 27 22:22:32 compute-0 sudo[57729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:22:32 compute-0 python3.9[57731]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:22:32 compute-0 sudo[57729]: pam_unix(sudo:session): session closed for user root
Jan 27 22:22:32 compute-0 sudo[57881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udkcyuuipttadawnnzcdeiyqgdriofso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552552.4017718-209-149287531366325/AnsiballZ_ini_file.py'
Jan 27 22:22:32 compute-0 sudo[57881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:22:32 compute-0 python3.9[57883]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:22:32 compute-0 sudo[57881]: pam_unix(sudo:session): session closed for user root
Jan 27 22:22:33 compute-0 sudo[58033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmfxdllwrdscmzmjzckibfrtilarkjsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552552.9386005-209-174022039280443/AnsiballZ_ini_file.py'
Jan 27 22:22:33 compute-0 sudo[58033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:22:33 compute-0 python3.9[58035]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:22:33 compute-0 sudo[58033]: pam_unix(sudo:session): session closed for user root
Jan 27 22:22:33 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 27 22:22:34 compute-0 sudo[58185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjomuzqdqjxvbxrwcwdzuknnkfexduox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552553.7091093-224-59651135875660/AnsiballZ_stat.py'
Jan 27 22:22:34 compute-0 sudo[58185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:22:34 compute-0 python3.9[58187]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:22:34 compute-0 sudo[58185]: pam_unix(sudo:session): session closed for user root
Jan 27 22:22:34 compute-0 sudo[58308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmgtskhrmusmqbceiwwjlvnaujywgdlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552553.7091093-224-59651135875660/AnsiballZ_copy.py'
Jan 27 22:22:34 compute-0 sudo[58308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:22:34 compute-0 python3.9[58310]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769552553.7091093-224-59651135875660/.source _original_basename=.rm_c7u8j follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:22:34 compute-0 sudo[58308]: pam_unix(sudo:session): session closed for user root
Jan 27 22:22:35 compute-0 sudo[58460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqhdxqofmxtjmsdvtfpofzgoqexezqym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552555.100193-239-2650464270224/AnsiballZ_file.py'
Jan 27 22:22:35 compute-0 sudo[58460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:22:35 compute-0 python3.9[58462]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:22:35 compute-0 sudo[58460]: pam_unix(sudo:session): session closed for user root
Jan 27 22:22:36 compute-0 sudo[58612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-paemmiyizynrvwbpmfpxwihjehjawsjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552555.6914308-247-213335369432778/AnsiballZ_edpm_os_net_config_mappings.py'
Jan 27 22:22:36 compute-0 sudo[58612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:22:36 compute-0 python3.9[58614]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 27 22:22:36 compute-0 sudo[58612]: pam_unix(sudo:session): session closed for user root
Jan 27 22:22:36 compute-0 sudo[58764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvbntsgyphhcbrtxcmcezyiwxsznbejp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552556.5227425-256-50327081825609/AnsiballZ_file.py'
Jan 27 22:22:36 compute-0 sudo[58764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:22:36 compute-0 python3.9[58766]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:22:37 compute-0 sudo[58764]: pam_unix(sudo:session): session closed for user root
Jan 27 22:22:37 compute-0 sudo[58916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpjpnbttaqnxcypkxgddufnytqksdcvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552557.275942-266-143749220656545/AnsiballZ_stat.py'
Jan 27 22:22:37 compute-0 sudo[58916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:22:37 compute-0 sudo[58916]: pam_unix(sudo:session): session closed for user root
Jan 27 22:22:38 compute-0 sudo[59039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwfyrormdjgwlvwndhzfsdbjrfjfwbjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552557.275942-266-143749220656545/AnsiballZ_copy.py'
Jan 27 22:22:38 compute-0 sudo[59039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:22:38 compute-0 sudo[59039]: pam_unix(sudo:session): session closed for user root
Jan 27 22:22:38 compute-0 sudo[59191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayutpotjrlloatyemnvibbvlrbnpgtpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552558.4085288-281-141557480114920/AnsiballZ_slurp.py'
Jan 27 22:22:38 compute-0 sudo[59191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:22:39 compute-0 python3.9[59193]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 27 22:22:39 compute-0 sudo[59191]: pam_unix(sudo:session): session closed for user root
Jan 27 22:22:39 compute-0 sudo[59366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhladztgsapowamjukdpuxmtolhtovhv ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552559.2601871-290-142393884852425/async_wrapper.py j588779304184 300 /home/zuul/.ansible/tmp/ansible-tmp-1769552559.2601871-290-142393884852425/AnsiballZ_edpm_os_net_config.py _'
Jan 27 22:22:39 compute-0 sudo[59366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:22:40 compute-0 ansible-async_wrapper.py[59368]: Invoked with j588779304184 300 /home/zuul/.ansible/tmp/ansible-tmp-1769552559.2601871-290-142393884852425/AnsiballZ_edpm_os_net_config.py _
Jan 27 22:22:40 compute-0 ansible-async_wrapper.py[59371]: Starting module and watcher
Jan 27 22:22:40 compute-0 ansible-async_wrapper.py[59371]: Start watching 59372 (300)
Jan 27 22:22:40 compute-0 ansible-async_wrapper.py[59372]: Start module (59372)
Jan 27 22:22:40 compute-0 ansible-async_wrapper.py[59368]: Return async_wrapper task started.
Jan 27 22:22:40 compute-0 sudo[59366]: pam_unix(sudo:session): session closed for user root
Jan 27 22:22:40 compute-0 python3.9[59373]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 27 22:22:40 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 27 22:22:40 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 27 22:22:40 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 27 22:22:40 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 27 22:22:40 compute-0 kernel: cfg80211: failed to load regulatory.db
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1152] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59374 uid=0 result="success"
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1166] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59374 uid=0 result="success"
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1642] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1643] audit: op="connection-add" uuid="11976b7d-0e40-4f01-b4c8-afb33b0dacca" name="br-ex-br" pid=59374 uid=0 result="success"
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1656] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1657] audit: op="connection-add" uuid="1692d696-eda4-4fe8-b715-bde1760423c2" name="br-ex-port" pid=59374 uid=0 result="success"
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1667] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1668] audit: op="connection-add" uuid="844adc1a-ff92-4062-baef-16f651fefe1b" name="eth1-port" pid=59374 uid=0 result="success"
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1678] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1679] audit: op="connection-add" uuid="485f906f-8c57-4902-8507-5ea66c6821b7" name="vlan20-port" pid=59374 uid=0 result="success"
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1690] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1691] audit: op="connection-add" uuid="0fda64a4-e296-4dc9-a994-2c46769719e8" name="vlan21-port" pid=59374 uid=0 result="success"
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1700] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1701] audit: op="connection-add" uuid="f7f2f28c-125d-49b9-85fc-b2d666aaf46d" name="vlan22-port" pid=59374 uid=0 result="success"
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1718] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.timestamp,connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.addr-gen-mode,ipv6.method,ipv6.dhcp-timeout,802-3-ethernet.mtu" pid=59374 uid=0 result="success"
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1734] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/10)
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1735] audit: op="connection-add" uuid="ff2faf3f-2ce8-4ebf-84de-c5be7bf7aedb" name="br-ex-if" pid=59374 uid=0 result="success"
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1772] audit: op="connection-update" uuid="60c54900-5f38-5c39-a049-82583fdf5947" name="ci-private-network" args="connection.port-type,connection.master,connection.slave-type,connection.controller,connection.timestamp,ipv4.never-default,ipv4.routes,ipv4.method,ipv4.dns,ipv4.addresses,ipv4.routing-rules,ipv6.routes,ipv6.addr-gen-mode,ipv6.method,ipv6.dns,ipv6.addresses,ipv6.routing-rules,ovs-interface.type,ovs-external-ids.data" pid=59374 uid=0 result="success"
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1788] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1789] audit: op="connection-add" uuid="c9a5a919-1476-4894-b9d8-c9d7fb2c3d7b" name="vlan20-if" pid=59374 uid=0 result="success"
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1803] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1804] audit: op="connection-add" uuid="189592ba-1a67-46d1-85cf-792255478cdf" name="vlan21-if" pid=59374 uid=0 result="success"
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1819] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1821] audit: op="connection-add" uuid="153a980b-cb74-469e-b037-250eee3c231f" name="vlan22-if" pid=59374 uid=0 result="success"
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1831] audit: op="connection-delete" uuid="cd54289b-cdde-3d56-a906-8e89599c3435" name="Wired connection 1" pid=59374 uid=0 result="success"
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1846] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <warn>  [1769552562.1849] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1855] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1860] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (11976b7d-0e40-4f01-b4c8-afb33b0dacca)
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1860] audit: op="connection-activate" uuid="11976b7d-0e40-4f01-b4c8-afb33b0dacca" name="br-ex-br" pid=59374 uid=0 result="success"
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1862] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <warn>  [1769552562.1863] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1868] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1872] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (1692d696-eda4-4fe8-b715-bde1760423c2)
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1874] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <warn>  [1769552562.1876] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1881] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1886] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (844adc1a-ff92-4062-baef-16f651fefe1b)
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1888] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <warn>  [1769552562.1890] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1895] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1900] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (485f906f-8c57-4902-8507-5ea66c6821b7)
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1902] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <warn>  [1769552562.1903] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1908] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1912] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (0fda64a4-e296-4dc9-a994-2c46769719e8)
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1914] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <warn>  [1769552562.1916] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1921] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1924] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (f7f2f28c-125d-49b9-85fc-b2d666aaf46d)
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1925] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1927] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1929] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1934] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <warn>  [1769552562.1935] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1938] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1941] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (ff2faf3f-2ce8-4ebf-84de-c5be7bf7aedb)
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1942] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1944] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1946] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1947] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1948] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1956] device (eth1): disconnecting for new activation request.
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1957] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1960] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1961] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1962] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1965] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <warn>  [1769552562.1965] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1969] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1972] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (c9a5a919-1476-4894-b9d8-c9d7fb2c3d7b)
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1973] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1975] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1976] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1978] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1980] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <warn>  [1769552562.1981] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1984] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1988] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (189592ba-1a67-46d1-85cf-792255478cdf)
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1988] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1991] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1992] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1993] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1995] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <warn>  [1769552562.1996] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.1999] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2003] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (153a980b-cb74-469e-b037-250eee3c231f)
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2004] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2006] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2008] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2009] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2010] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2021] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu" pid=59374 uid=0 result="success"
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2023] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2026] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2028] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2033] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2037] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2040] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2043] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 kernel: ovs-system: entered promiscuous mode
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2053] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2058] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2061] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2063] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2065] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 kernel: Timeout policy base is empty
Jan 27 22:22:42 compute-0 systemd-udevd[59380]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2069] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2072] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2075] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2076] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2080] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2084] dhcp4 (eth0): canceled DHCP transaction
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2084] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2085] dhcp4 (eth0): state changed no lease
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2086] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2094] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2097] audit: op="device-reapply" interface="eth1" ifindex=3 pid=59374 uid=0 result="fail" reason="Device is not activated"
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2127] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2133] device (eth1): disconnecting for new activation request.
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2133] audit: op="connection-activate" uuid="60c54900-5f38-5c39-a049-82583fdf5947" name="ci-private-network" pid=59374 uid=0 result="success"
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2135] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2139] dhcp4 (eth0): state changed new lease, address=38.102.83.82
Jan 27 22:22:42 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2175] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2221] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59374 uid=0 result="success"
Jan 27 22:22:42 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2281] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 27 22:22:42 compute-0 kernel: br-ex: entered promiscuous mode
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2362] device (eth1): Activation: starting connection 'ci-private-network' (60c54900-5f38-5c39-a049-82583fdf5947)
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2366] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2375] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2378] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2383] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2386] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2394] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2395] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2396] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2397] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2398] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2407] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2413] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2416] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2419] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2422] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2425] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 kernel: vlan22: entered promiscuous mode
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2428] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2431] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2434] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2436] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2439] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 27 22:22:42 compute-0 systemd-udevd[59378]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2443] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2453] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2465] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2480] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 kernel: vlan20: entered promiscuous mode
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2493] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2494] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2498] device (eth1): Activation: successful, device activated.
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2531] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2533] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 kernel: vlan21: entered promiscuous mode
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2539] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 27 22:22:42 compute-0 systemd-udevd[59379]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 22:22:42 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2588] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2605] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2613] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2628] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2643] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2644] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2647] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2651] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2665] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2675] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2676] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2681] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2716] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2718] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 27 22:22:42 compute-0 NetworkManager[56600]: <info>  [1769552562.2722] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 27 22:22:43 compute-0 NetworkManager[56600]: <info>  [1769552563.3783] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59374 uid=0 result="success"
Jan 27 22:22:43 compute-0 NetworkManager[56600]: <info>  [1769552563.5188] checkpoint[0x56356c6f6950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 27 22:22:43 compute-0 NetworkManager[56600]: <info>  [1769552563.5192] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59374 uid=0 result="success"
Jan 27 22:22:43 compute-0 sudo[59707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmzpddghajcvagqexgntojvawpmnoghu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552563.2394936-290-194822858609064/AnsiballZ_async_status.py'
Jan 27 22:22:43 compute-0 sudo[59707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:22:43 compute-0 NetworkManager[56600]: <info>  [1769552563.8126] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59374 uid=0 result="success"
Jan 27 22:22:43 compute-0 NetworkManager[56600]: <info>  [1769552563.8138] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59374 uid=0 result="success"
Jan 27 22:22:43 compute-0 python3.9[59709]: ansible-ansible.legacy.async_status Invoked with jid=j588779304184.59368 mode=status _async_dir=/root/.ansible_async
Jan 27 22:22:43 compute-0 sudo[59707]: pam_unix(sudo:session): session closed for user root
Jan 27 22:22:43 compute-0 NetworkManager[56600]: <info>  [1769552563.9957] audit: op="networking-control" arg="global-dns-configuration" pid=59374 uid=0 result="success"
Jan 27 22:22:43 compute-0 NetworkManager[56600]: <info>  [1769552563.9982] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 27 22:22:44 compute-0 NetworkManager[56600]: <info>  [1769552564.0009] audit: op="networking-control" arg="global-dns-configuration" pid=59374 uid=0 result="success"
Jan 27 22:22:44 compute-0 NetworkManager[56600]: <info>  [1769552564.0028] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59374 uid=0 result="success"
Jan 27 22:22:44 compute-0 NetworkManager[56600]: <info>  [1769552564.1364] checkpoint[0x56356c6f6a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 27 22:22:44 compute-0 NetworkManager[56600]: <info>  [1769552564.1367] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59374 uid=0 result="success"
Jan 27 22:22:44 compute-0 ansible-async_wrapper.py[59372]: Module complete (59372)
Jan 27 22:22:45 compute-0 ansible-async_wrapper.py[59371]: Done in kid B.
Jan 27 22:22:47 compute-0 sudo[59811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfdcurwyzjvgxambfstdioyxwugbzbgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552563.2394936-290-194822858609064/AnsiballZ_async_status.py'
Jan 27 22:22:47 compute-0 sudo[59811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:22:47 compute-0 python3.9[59813]: ansible-ansible.legacy.async_status Invoked with jid=j588779304184.59368 mode=status _async_dir=/root/.ansible_async
Jan 27 22:22:47 compute-0 sudo[59811]: pam_unix(sudo:session): session closed for user root
Jan 27 22:22:47 compute-0 sudo[59911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyifeqrjfgiobnxapqvkxokjnhgwhuhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552563.2394936-290-194822858609064/AnsiballZ_async_status.py'
Jan 27 22:22:47 compute-0 sudo[59911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:22:47 compute-0 python3.9[59913]: ansible-ansible.legacy.async_status Invoked with jid=j588779304184.59368 mode=cleanup _async_dir=/root/.ansible_async
Jan 27 22:22:47 compute-0 sudo[59911]: pam_unix(sudo:session): session closed for user root
Jan 27 22:22:48 compute-0 sudo[60063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfbkqcqhvfklrgrkjcabsvrbekouplgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552568.0838346-317-200307769557818/AnsiballZ_stat.py'
Jan 27 22:22:48 compute-0 sudo[60063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:22:48 compute-0 python3.9[60065]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:22:48 compute-0 sudo[60063]: pam_unix(sudo:session): session closed for user root
Jan 27 22:22:48 compute-0 sudo[60186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pekbgmncxrrzpcsmsfeyygszkgvozurk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552568.0838346-317-200307769557818/AnsiballZ_copy.py'
Jan 27 22:22:48 compute-0 sudo[60186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:22:49 compute-0 python3.9[60188]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769552568.0838346-317-200307769557818/.source.returncode _original_basename=.mbu806ls follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:22:49 compute-0 sudo[60186]: pam_unix(sudo:session): session closed for user root
Jan 27 22:22:49 compute-0 sudo[60338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqvhdvjpuxggrmfehspveooffpkgdxxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552569.4123645-333-186821637585658/AnsiballZ_stat.py'
Jan 27 22:22:49 compute-0 sudo[60338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:22:49 compute-0 python3.9[60340]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:22:49 compute-0 sudo[60338]: pam_unix(sudo:session): session closed for user root
Jan 27 22:22:50 compute-0 sudo[60461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlyktqzzavpzwnelneohcrkcoklzzmzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552569.4123645-333-186821637585658/AnsiballZ_copy.py'
Jan 27 22:22:50 compute-0 sudo[60461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:22:50 compute-0 python3.9[60463]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769552569.4123645-333-186821637585658/.source.cfg _original_basename=.__vsvg67 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:22:50 compute-0 sudo[60461]: pam_unix(sudo:session): session closed for user root
Jan 27 22:22:50 compute-0 sudo[60614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fiajqpmrwoqompzszsfohssbhxzuyunw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552570.6484082-348-136661166408934/AnsiballZ_systemd.py'
Jan 27 22:22:50 compute-0 sudo[60614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:22:51 compute-0 python3.9[60616]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 22:22:51 compute-0 systemd[1]: Reloading Network Manager...
Jan 27 22:22:51 compute-0 NetworkManager[56600]: <info>  [1769552571.2497] audit: op="reload" arg="0" pid=60620 uid=0 result="success"
Jan 27 22:22:51 compute-0 NetworkManager[56600]: <info>  [1769552571.2504] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 27 22:22:51 compute-0 systemd[1]: Reloaded Network Manager.
Jan 27 22:22:51 compute-0 sudo[60614]: pam_unix(sudo:session): session closed for user root
Jan 27 22:22:51 compute-0 sshd-session[52606]: Connection closed by 192.168.122.30 port 56768
Jan 27 22:22:51 compute-0 sshd-session[52603]: pam_unix(sshd:session): session closed for user zuul
Jan 27 22:22:51 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Jan 27 22:22:51 compute-0 systemd[1]: session-11.scope: Consumed 49.132s CPU time.
Jan 27 22:22:51 compute-0 systemd-logind[789]: Session 11 logged out. Waiting for processes to exit.
Jan 27 22:22:51 compute-0 systemd-logind[789]: Removed session 11.
Jan 27 22:22:53 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 27 22:22:57 compute-0 sshd-session[60653]: Accepted publickey for zuul from 192.168.122.30 port 42660 ssh2: ECDSA SHA256:f2siSFgqhRl+V43NMPJ82N3mZUylXFtu0KAbYfQTK7A
Jan 27 22:22:57 compute-0 systemd-logind[789]: New session 12 of user zuul.
Jan 27 22:22:57 compute-0 systemd[1]: Started Session 12 of User zuul.
Jan 27 22:22:57 compute-0 sshd-session[60653]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 22:22:58 compute-0 python3.9[60806]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:22:59 compute-0 python3.9[60960]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 22:23:00 compute-0 python3.9[61150]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:23:01 compute-0 sshd-session[60656]: Connection closed by 192.168.122.30 port 42660
Jan 27 22:23:01 compute-0 sshd-session[60653]: pam_unix(sshd:session): session closed for user zuul
Jan 27 22:23:01 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Jan 27 22:23:01 compute-0 systemd[1]: session-12.scope: Consumed 2.313s CPU time.
Jan 27 22:23:01 compute-0 systemd-logind[789]: Session 12 logged out. Waiting for processes to exit.
Jan 27 22:23:01 compute-0 systemd-logind[789]: Removed session 12.
Jan 27 22:23:01 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 27 22:23:06 compute-0 sshd-session[61179]: Accepted publickey for zuul from 192.168.122.30 port 54562 ssh2: ECDSA SHA256:f2siSFgqhRl+V43NMPJ82N3mZUylXFtu0KAbYfQTK7A
Jan 27 22:23:06 compute-0 systemd-logind[789]: New session 13 of user zuul.
Jan 27 22:23:06 compute-0 systemd[1]: Started Session 13 of User zuul.
Jan 27 22:23:06 compute-0 sshd-session[61179]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 22:23:07 compute-0 python3.9[61332]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:23:08 compute-0 python3.9[61486]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:23:09 compute-0 sudo[61640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpeoexmjdfqnuvgoysxwpeqxugaqpgih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552589.3501267-35-5336484390651/AnsiballZ_setup.py'
Jan 27 22:23:09 compute-0 sudo[61640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:09 compute-0 python3.9[61642]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 22:23:10 compute-0 sudo[61640]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:10 compute-0 sudo[61725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oroizheaawnnprotrgfvoklnqolzkwtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552589.3501267-35-5336484390651/AnsiballZ_dnf.py'
Jan 27 22:23:10 compute-0 sudo[61725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:10 compute-0 python3.9[61727]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 22:23:12 compute-0 sudo[61725]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:12 compute-0 sudo[61879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zogxwicjjtuhwtrpzcbruzmfqcwynkcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552592.2879033-47-175240911929381/AnsiballZ_setup.py'
Jan 27 22:23:12 compute-0 sudo[61879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:12 compute-0 python3.9[61881]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 22:23:13 compute-0 sudo[61879]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:13 compute-0 sudo[62071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwoqfnfoerwffwxdybelrpncjvhwrkxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552593.4934607-58-168650605617309/AnsiballZ_file.py'
Jan 27 22:23:13 compute-0 sudo[62071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:14 compute-0 python3.9[62073]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:23:14 compute-0 sudo[62071]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:14 compute-0 sudo[62223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqunowwqwidzwcxabdjrivyzudiqnmfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552594.2862532-66-212441392387596/AnsiballZ_command.py'
Jan 27 22:23:14 compute-0 sudo[62223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:14 compute-0 python3.9[62225]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:23:14 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 22:23:14 compute-0 sudo[62223]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:15 compute-0 sudo[62386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuchzietazvjesnrpabfyybxurwpiqnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552595.1211565-74-200450601550579/AnsiballZ_stat.py'
Jan 27 22:23:15 compute-0 sudo[62386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:15 compute-0 python3.9[62388]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:23:15 compute-0 sudo[62386]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:16 compute-0 sudo[62464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skwtdmoefopircjmgwizxvyyuxkbmyeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552595.1211565-74-200450601550579/AnsiballZ_file.py'
Jan 27 22:23:16 compute-0 sudo[62464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:16 compute-0 python3.9[62466]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:23:16 compute-0 sudo[62464]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:16 compute-0 sudo[62616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyoyleooyjupqhsfwxjusnbjznsbrkzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552596.4253-86-23260527780932/AnsiballZ_stat.py'
Jan 27 22:23:16 compute-0 sudo[62616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:16 compute-0 python3.9[62618]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:23:16 compute-0 sudo[62616]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:17 compute-0 sudo[62694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mspcmdypgxuumcksgpmpmtqbpcfjrbpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552596.4253-86-23260527780932/AnsiballZ_file.py'
Jan 27 22:23:17 compute-0 sudo[62694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:17 compute-0 python3.9[62696]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:23:17 compute-0 sudo[62694]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:18 compute-0 sudo[62846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-negitsnmlgdhmfdmfnucptqeyjtzkefk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552597.6427536-99-264385655952951/AnsiballZ_ini_file.py'
Jan 27 22:23:18 compute-0 sudo[62846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:18 compute-0 python3.9[62848]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:23:18 compute-0 sudo[62846]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:18 compute-0 sudo[62998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbtrboyyfabqziviryluqpfoemaelfrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552598.5055683-99-247745947049090/AnsiballZ_ini_file.py'
Jan 27 22:23:18 compute-0 sudo[62998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:18 compute-0 python3.9[63000]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:23:18 compute-0 sudo[62998]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:19 compute-0 sudo[63150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wogcqctvlhkpcapvowxhklnzajlamous ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552599.1107786-99-223101577527791/AnsiballZ_ini_file.py'
Jan 27 22:23:19 compute-0 sudo[63150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:19 compute-0 python3.9[63152]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:23:19 compute-0 sudo[63150]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:20 compute-0 sudo[63302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdzjfhgoejfczqsvgrfbvdakjcqvftej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552599.830509-99-238212009445652/AnsiballZ_ini_file.py'
Jan 27 22:23:20 compute-0 sudo[63302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:20 compute-0 python3.9[63304]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:23:20 compute-0 sudo[63302]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:20 compute-0 sudo[63454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcbbvsmmxhveixwhhyhlbgmnrmqfsrro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552600.5474896-130-217780511716852/AnsiballZ_dnf.py'
Jan 27 22:23:20 compute-0 sudo[63454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:21 compute-0 python3.9[63456]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 22:23:22 compute-0 sudo[63454]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:23 compute-0 sudo[63607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cofgejkanazdkmtltrkbhhxarglmccws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552602.6771448-141-74590397686353/AnsiballZ_setup.py'
Jan 27 22:23:23 compute-0 sudo[63607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:23 compute-0 python3.9[63609]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:23:23 compute-0 sudo[63607]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:23 compute-0 sudo[63761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxtywauswzmacjytcqzietawamamqdlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552603.4550924-149-135130141389665/AnsiballZ_stat.py'
Jan 27 22:23:23 compute-0 sudo[63761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:23 compute-0 python3.9[63763]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:23:23 compute-0 sudo[63761]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:24 compute-0 sudo[63913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lccfhehgwkqklkepeslicnuwezbgisfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552604.0766163-158-119524411747625/AnsiballZ_stat.py'
Jan 27 22:23:24 compute-0 sudo[63913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:24 compute-0 python3.9[63915]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:23:24 compute-0 sudo[63913]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:24 compute-0 sudo[64065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywnfihewablgvgwryffvdecpxmjesnnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552604.7573905-168-64731791677846/AnsiballZ_command.py'
Jan 27 22:23:24 compute-0 sudo[64065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:25 compute-0 python3.9[64067]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:23:25 compute-0 sudo[64065]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:25 compute-0 sudo[64218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqliryoradjvonihxzaqbdfkjccmhmbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552605.4040513-178-39729737290409/AnsiballZ_service_facts.py'
Jan 27 22:23:25 compute-0 sudo[64218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:26 compute-0 python3.9[64220]: ansible-service_facts Invoked
Jan 27 22:23:26 compute-0 network[64237]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 22:23:26 compute-0 network[64238]: 'network-scripts' will be removed from distribution in near future.
Jan 27 22:23:26 compute-0 network[64239]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 22:23:29 compute-0 sudo[64218]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:30 compute-0 sudo[64522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkiagfqunxxehstygktxkiltqvtbxrin ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769552610.1256804-193-176617381076973/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769552610.1256804-193-176617381076973/args'
Jan 27 22:23:30 compute-0 sudo[64522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:30 compute-0 sudo[64522]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:31 compute-0 sudo[64689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csedhtiejglrpniyznmqjgilqbiovhpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552610.8634388-204-196189138439382/AnsiballZ_dnf.py'
Jan 27 22:23:31 compute-0 sudo[64689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:31 compute-0 python3.9[64691]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 22:23:32 compute-0 sudo[64689]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:33 compute-0 sudo[64842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyijovlrugnkrsdguwiaeshhwtvqacfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552613.161486-217-4562684955636/AnsiballZ_package_facts.py'
Jan 27 22:23:33 compute-0 sudo[64842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:34 compute-0 python3.9[64844]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 27 22:23:34 compute-0 sudo[64842]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:35 compute-0 sudo[64994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huipqfawlzhodqhacnikgqdicyeqxhle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552614.7994-227-225313209147473/AnsiballZ_stat.py'
Jan 27 22:23:35 compute-0 sudo[64994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:35 compute-0 python3.9[64996]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:23:35 compute-0 sudo[64994]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:36 compute-0 sudo[65119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlaniwmgnwsvxlbiykuwdyfgixmhivtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552614.7994-227-225313209147473/AnsiballZ_copy.py'
Jan 27 22:23:36 compute-0 sudo[65119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:36 compute-0 python3.9[65121]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769552614.7994-227-225313209147473/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:23:36 compute-0 sudo[65119]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:36 compute-0 sudo[65273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmjccsygsywgzghbudlitagflxrfcrrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552616.646713-242-88431645273580/AnsiballZ_stat.py'
Jan 27 22:23:36 compute-0 sudo[65273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:37 compute-0 python3.9[65275]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:23:37 compute-0 sudo[65273]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:37 compute-0 sudo[65398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prexbtfwgvgspdgvmtzmkdgdhkhngtwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552616.646713-242-88431645273580/AnsiballZ_copy.py'
Jan 27 22:23:37 compute-0 sudo[65398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:37 compute-0 python3.9[65400]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769552616.646713-242-88431645273580/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:23:37 compute-0 sudo[65398]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:38 compute-0 sudo[65552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jydjrhnofhthjhdghsxzerecotfcrufa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552618.2069635-263-58633085458014/AnsiballZ_lineinfile.py'
Jan 27 22:23:38 compute-0 sudo[65552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:38 compute-0 python3.9[65554]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:23:38 compute-0 sudo[65552]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:39 compute-0 sudo[65706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjrmebcdojlnxhasoygpiqzkbtcxxxfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552619.4594984-278-105575242514038/AnsiballZ_setup.py'
Jan 27 22:23:39 compute-0 sudo[65706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:40 compute-0 python3.9[65708]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 22:23:40 compute-0 sudo[65706]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:41 compute-0 sudo[65790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxinyivdjpotllwywiwrxekppexmpofv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552619.4594984-278-105575242514038/AnsiballZ_systemd.py'
Jan 27 22:23:41 compute-0 sudo[65790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:41 compute-0 python3.9[65792]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:23:41 compute-0 sudo[65790]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:42 compute-0 sudo[65944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoicqtuijfajijojthatouokdclbuvah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552621.8696876-294-8200898679289/AnsiballZ_setup.py'
Jan 27 22:23:42 compute-0 sudo[65944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:42 compute-0 python3.9[65946]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 22:23:42 compute-0 sudo[65944]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:42 compute-0 sudo[66028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyqleqduxghytfxhuabxjcmrclnjmsfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552621.8696876-294-8200898679289/AnsiballZ_systemd.py'
Jan 27 22:23:42 compute-0 sudo[66028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:43 compute-0 python3.9[66030]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 22:23:43 compute-0 chronyd[792]: chronyd exiting
Jan 27 22:23:43 compute-0 systemd[1]: Stopping NTP client/server...
Jan 27 22:23:43 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Jan 27 22:23:43 compute-0 systemd[1]: Stopped NTP client/server.
Jan 27 22:23:43 compute-0 systemd[1]: Starting NTP client/server...
Jan 27 22:23:43 compute-0 chronyd[66038]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 27 22:23:43 compute-0 chronyd[66038]: Frequency -26.491 +/- 0.370 ppm read from /var/lib/chrony/drift
Jan 27 22:23:43 compute-0 chronyd[66038]: Loaded seccomp filter (level 2)
Jan 27 22:23:43 compute-0 systemd[1]: Started NTP client/server.
Jan 27 22:23:43 compute-0 sudo[66028]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:43 compute-0 sshd-session[61182]: Connection closed by 192.168.122.30 port 54562
Jan 27 22:23:43 compute-0 sshd-session[61179]: pam_unix(sshd:session): session closed for user zuul
Jan 27 22:23:43 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Jan 27 22:23:43 compute-0 systemd[1]: session-13.scope: Consumed 26.398s CPU time.
Jan 27 22:23:43 compute-0 systemd-logind[789]: Session 13 logged out. Waiting for processes to exit.
Jan 27 22:23:43 compute-0 systemd-logind[789]: Removed session 13.
Jan 27 22:23:49 compute-0 sshd-session[66064]: Accepted publickey for zuul from 192.168.122.30 port 44170 ssh2: ECDSA SHA256:f2siSFgqhRl+V43NMPJ82N3mZUylXFtu0KAbYfQTK7A
Jan 27 22:23:49 compute-0 systemd-logind[789]: New session 14 of user zuul.
Jan 27 22:23:49 compute-0 systemd[1]: Started Session 14 of User zuul.
Jan 27 22:23:49 compute-0 sshd-session[66064]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 22:23:50 compute-0 python3.9[66217]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:23:51 compute-0 sudo[66371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxpilezeprgwzhhbipjswieuqhyylgaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552630.8853676-28-75286217732265/AnsiballZ_file.py'
Jan 27 22:23:51 compute-0 sudo[66371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:51 compute-0 python3.9[66373]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:23:51 compute-0 sudo[66371]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:52 compute-0 sudo[66546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fiepdydxvonvreudlotwlmarzugsvmzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552631.7756617-36-13088831541837/AnsiballZ_stat.py'
Jan 27 22:23:52 compute-0 sudo[66546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:52 compute-0 python3.9[66548]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:23:52 compute-0 sudo[66546]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:52 compute-0 sudo[66624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lslhblmifhvjobgtsxbvewgezolbkxep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552631.7756617-36-13088831541837/AnsiballZ_file.py'
Jan 27 22:23:52 compute-0 sudo[66624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:52 compute-0 python3.9[66626]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.fqa6e3mg recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:23:52 compute-0 sudo[66624]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:53 compute-0 sudo[66776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbnsbsavwbppctoklsyuxzgggyisgcti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552633.3565547-56-243535866173783/AnsiballZ_stat.py'
Jan 27 22:23:53 compute-0 sudo[66776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:53 compute-0 python3.9[66778]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:23:53 compute-0 sudo[66776]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:54 compute-0 sudo[66899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yolrlhnyuomgtlpgbognlhfyqsnnszyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552633.3565547-56-243535866173783/AnsiballZ_copy.py'
Jan 27 22:23:54 compute-0 sudo[66899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:54 compute-0 python3.9[66901]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769552633.3565547-56-243535866173783/.source _original_basename=.oawf66i4 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:23:54 compute-0 sudo[66899]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:54 compute-0 sudo[67051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqmxpeowshspgttriphmfdrmgwnsalgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552634.6822631-72-141754240114469/AnsiballZ_file.py'
Jan 27 22:23:54 compute-0 sudo[67051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:55 compute-0 python3.9[67053]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:23:55 compute-0 sudo[67051]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:55 compute-0 sudo[67203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quhcilzlbvncfuebnqtesnohibtmollg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552635.3252368-80-269766610286016/AnsiballZ_stat.py'
Jan 27 22:23:55 compute-0 sudo[67203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:55 compute-0 python3.9[67205]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:23:55 compute-0 sudo[67203]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:56 compute-0 sudo[67326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvbtpaurjxdtcvjmffbhukdniveptvoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552635.3252368-80-269766610286016/AnsiballZ_copy.py'
Jan 27 22:23:56 compute-0 sudo[67326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:56 compute-0 python3.9[67328]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769552635.3252368-80-269766610286016/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:23:56 compute-0 sudo[67326]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:56 compute-0 sudo[67478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpqyqavwjfhjfwbwyozeyqlewvzjfymy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552636.438455-80-195603638561963/AnsiballZ_stat.py'
Jan 27 22:23:56 compute-0 sudo[67478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:56 compute-0 python3.9[67480]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:23:56 compute-0 sudo[67478]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:57 compute-0 sudo[67601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knurxuppyynymuwpdcetejegeuktqnbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552636.438455-80-195603638561963/AnsiballZ_copy.py'
Jan 27 22:23:57 compute-0 sudo[67601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:57 compute-0 python3.9[67603]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769552636.438455-80-195603638561963/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:23:57 compute-0 sudo[67601]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:57 compute-0 sudo[67753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scvonsporbfvvqeblmkozxxgwdoekulv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552637.5801833-109-83428575267540/AnsiballZ_file.py'
Jan 27 22:23:57 compute-0 sudo[67753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:58 compute-0 python3.9[67755]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:23:58 compute-0 sudo[67753]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:58 compute-0 sudo[67905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdxcdzhwmihrqdwtnkesvaflhqaspwmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552638.1796632-117-91597181091719/AnsiballZ_stat.py'
Jan 27 22:23:58 compute-0 sudo[67905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:58 compute-0 python3.9[67907]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:23:58 compute-0 sudo[67905]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:59 compute-0 sudo[68028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrgkhbekdnkollngaasfkutfzxufbqoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552638.1796632-117-91597181091719/AnsiballZ_copy.py'
Jan 27 22:23:59 compute-0 sudo[68028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:59 compute-0 python3.9[68030]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552638.1796632-117-91597181091719/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:23:59 compute-0 sudo[68028]: pam_unix(sudo:session): session closed for user root
Jan 27 22:23:59 compute-0 sudo[68180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alowlxscukezzyzjnhvlicytjshxuhuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552639.4430892-132-216707043251285/AnsiballZ_stat.py'
Jan 27 22:23:59 compute-0 sudo[68180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:23:59 compute-0 python3.9[68182]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:23:59 compute-0 sudo[68180]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:00 compute-0 sudo[68303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcozagtmqsfolkusknqebypbgzubdtio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552639.4430892-132-216707043251285/AnsiballZ_copy.py'
Jan 27 22:24:00 compute-0 sudo[68303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:00 compute-0 python3.9[68305]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552639.4430892-132-216707043251285/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:24:00 compute-0 sudo[68303]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:01 compute-0 sudo[68455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoaqjxgsryyaqyhtzxuiowiudqsitfed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552640.4948385-147-161296151488881/AnsiballZ_systemd.py'
Jan 27 22:24:01 compute-0 sudo[68455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:01 compute-0 python3.9[68457]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:24:01 compute-0 systemd[1]: Reloading.
Jan 27 22:24:01 compute-0 systemd-rc-local-generator[68486]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:24:01 compute-0 systemd-sysv-generator[68491]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:24:01 compute-0 systemd[1]: Reloading.
Jan 27 22:24:01 compute-0 systemd-sysv-generator[68525]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:24:01 compute-0 systemd-rc-local-generator[68521]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:24:01 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Jan 27 22:24:01 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Jan 27 22:24:01 compute-0 sudo[68455]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:02 compute-0 sudo[68682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhnburjhzwdqyolqdmfhhjaxtrclznng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552641.9399407-155-67615694918013/AnsiballZ_stat.py'
Jan 27 22:24:02 compute-0 sudo[68682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:02 compute-0 python3.9[68684]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:24:02 compute-0 sudo[68682]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:02 compute-0 sudo[68805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrevvacbyzgmvjryrxsbuezczcmcwmgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552641.9399407-155-67615694918013/AnsiballZ_copy.py'
Jan 27 22:24:02 compute-0 sudo[68805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:02 compute-0 python3.9[68807]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552641.9399407-155-67615694918013/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:24:02 compute-0 sudo[68805]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:03 compute-0 sudo[68957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boqlmfuxrdhrdsepmgcbdvcxyhmotggk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552643.0112755-170-268545572276074/AnsiballZ_stat.py'
Jan 27 22:24:03 compute-0 sudo[68957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:03 compute-0 python3.9[68959]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:24:03 compute-0 sudo[68957]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:03 compute-0 sudo[69080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydxdsiaaynqmirkpsfsacpxyqlriywet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552643.0112755-170-268545572276074/AnsiballZ_copy.py'
Jan 27 22:24:03 compute-0 sudo[69080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:04 compute-0 python3.9[69082]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552643.0112755-170-268545572276074/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:24:04 compute-0 sudo[69080]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:04 compute-0 sudo[69232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqbvkiutggrimmwpzeppcjyfhftxydtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552644.4485214-185-226460037616890/AnsiballZ_systemd.py'
Jan 27 22:24:04 compute-0 sudo[69232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:04 compute-0 python3.9[69234]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:24:05 compute-0 systemd[1]: Reloading.
Jan 27 22:24:05 compute-0 systemd-rc-local-generator[69260]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:24:05 compute-0 systemd-sysv-generator[69263]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:24:05 compute-0 systemd[1]: Reloading.
Jan 27 22:24:05 compute-0 systemd-rc-local-generator[69303]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:24:05 compute-0 systemd-sysv-generator[69306]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:24:05 compute-0 systemd[1]: Starting Create netns directory...
Jan 27 22:24:05 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 27 22:24:05 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 27 22:24:05 compute-0 systemd[1]: Finished Create netns directory.
Jan 27 22:24:05 compute-0 sudo[69232]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:06 compute-0 python3.9[69461]: ansible-ansible.builtin.service_facts Invoked
Jan 27 22:24:06 compute-0 network[69478]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 22:24:06 compute-0 network[69479]: 'network-scripts' will be removed from distribution in near future.
Jan 27 22:24:06 compute-0 network[69480]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 22:24:11 compute-0 sudo[69740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqprxquppslsjufhuzszhjcpqhpbyqbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552651.1426682-201-218217124847770/AnsiballZ_systemd.py'
Jan 27 22:24:11 compute-0 sudo[69740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:11 compute-0 python3.9[69742]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:24:11 compute-0 systemd[1]: Reloading.
Jan 27 22:24:11 compute-0 systemd-sysv-generator[69776]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:24:11 compute-0 systemd-rc-local-generator[69773]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:24:11 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 27 22:24:12 compute-0 iptables.init[69783]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 27 22:24:12 compute-0 iptables.init[69783]: iptables: Flushing firewall rules: [  OK  ]
Jan 27 22:24:12 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Jan 27 22:24:12 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 27 22:24:12 compute-0 sudo[69740]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:12 compute-0 sudo[69977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riuuwrqhjjfiuczodxvbqfunfddxvlsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552652.4513109-201-261373467269361/AnsiballZ_systemd.py'
Jan 27 22:24:12 compute-0 sudo[69977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:13 compute-0 python3.9[69979]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:24:13 compute-0 sudo[69977]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:13 compute-0 sudo[70131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbuzwhapsijfikwmgiornfncrqnuawgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552653.3699884-217-136874491700634/AnsiballZ_systemd.py'
Jan 27 22:24:13 compute-0 sudo[70131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:13 compute-0 python3.9[70133]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:24:14 compute-0 systemd[1]: Reloading.
Jan 27 22:24:14 compute-0 systemd-sysv-generator[70167]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:24:14 compute-0 systemd-rc-local-generator[70163]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:24:14 compute-0 systemd[1]: Starting Netfilter Tables...
Jan 27 22:24:14 compute-0 systemd[1]: Finished Netfilter Tables.
Jan 27 22:24:14 compute-0 sudo[70131]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:15 compute-0 sudo[70323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncbfwupcdpphychlcajplerehognnifx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552654.532019-225-57317306108203/AnsiballZ_command.py'
Jan 27 22:24:15 compute-0 sudo[70323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:15 compute-0 python3.9[70325]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:24:15 compute-0 sudo[70323]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:15 compute-0 sudo[70476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjqcmaepgqszxlvbnatbynaddufjqbmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552655.5223634-239-214345638338398/AnsiballZ_stat.py'
Jan 27 22:24:15 compute-0 sudo[70476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:15 compute-0 python3.9[70478]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:24:15 compute-0 sudo[70476]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:16 compute-0 sudo[70601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjcwvezpkvjxzgeogrcbgbrxcadozvrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552655.5223634-239-214345638338398/AnsiballZ_copy.py'
Jan 27 22:24:16 compute-0 sudo[70601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:16 compute-0 python3.9[70603]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769552655.5223634-239-214345638338398/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:24:16 compute-0 sudo[70601]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:17 compute-0 sudo[70754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flmtdpfymbgmcwcbekzysbmqbbysvhpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552656.839696-254-156269610201163/AnsiballZ_systemd.py'
Jan 27 22:24:17 compute-0 sudo[70754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:17 compute-0 python3.9[70756]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 22:24:17 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Jan 27 22:24:17 compute-0 sshd[1004]: Received SIGHUP; restarting.
Jan 27 22:24:17 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Jan 27 22:24:17 compute-0 sshd[1004]: Server listening on 0.0.0.0 port 22.
Jan 27 22:24:17 compute-0 sshd[1004]: Server listening on :: port 22.
Jan 27 22:24:17 compute-0 sudo[70754]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:17 compute-0 sudo[70910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpvhblgoijgyrzfgplrnqwjctehtylwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552657.6929452-262-52034480412454/AnsiballZ_file.py'
Jan 27 22:24:17 compute-0 sudo[70910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:18 compute-0 python3.9[70912]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:24:18 compute-0 sudo[70910]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:18 compute-0 sudo[71062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otybsskiisgtvzqndghhgsomqixnqzri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552658.3313408-270-215622941043084/AnsiballZ_stat.py'
Jan 27 22:24:18 compute-0 sudo[71062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:18 compute-0 python3.9[71064]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:24:18 compute-0 sudo[71062]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:19 compute-0 sudo[71185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrpltaulfyxoitakkhgvgchxwtjnkeqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552658.3313408-270-215622941043084/AnsiballZ_copy.py'
Jan 27 22:24:19 compute-0 sudo[71185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:19 compute-0 python3.9[71187]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552658.3313408-270-215622941043084/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:24:19 compute-0 sudo[71185]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:20 compute-0 sudo[71337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejjhvvqqdxsfksukxuprwnbqpgolsbgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552659.6384628-288-211275511564815/AnsiballZ_timezone.py'
Jan 27 22:24:20 compute-0 sudo[71337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:20 compute-0 python3.9[71339]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 27 22:24:20 compute-0 systemd[1]: Starting Time & Date Service...
Jan 27 22:24:20 compute-0 systemd[1]: Started Time & Date Service.
Jan 27 22:24:20 compute-0 sudo[71337]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:21 compute-0 sudo[71493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqsbcrffidiergkfvfpinshzivtdofwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552660.771302-297-56958277540367/AnsiballZ_file.py'
Jan 27 22:24:21 compute-0 sudo[71493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:21 compute-0 python3.9[71495]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:24:21 compute-0 sudo[71493]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:21 compute-0 sudo[71645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihzifyenyxylmcgaqibblzmglshumcwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552661.5001166-305-88568218725045/AnsiballZ_stat.py'
Jan 27 22:24:21 compute-0 sudo[71645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:22 compute-0 python3.9[71647]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:24:22 compute-0 sudo[71645]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:22 compute-0 sudo[71768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nimdpdgfylhtuuawujornasmobihpmcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552661.5001166-305-88568218725045/AnsiballZ_copy.py'
Jan 27 22:24:22 compute-0 sudo[71768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:22 compute-0 python3.9[71770]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769552661.5001166-305-88568218725045/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:24:22 compute-0 sudo[71768]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:23 compute-0 sudo[71920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnumoohejypgzloohutqlfiecqplymax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552662.9069145-320-238144346012197/AnsiballZ_stat.py'
Jan 27 22:24:23 compute-0 sudo[71920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:23 compute-0 python3.9[71922]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:24:23 compute-0 sudo[71920]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:23 compute-0 sudo[72043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozrsfmkzxqtpnefipxnobcgjguaatwgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552662.9069145-320-238144346012197/AnsiballZ_copy.py'
Jan 27 22:24:23 compute-0 sudo[72043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:23 compute-0 python3.9[72045]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769552662.9069145-320-238144346012197/.source.yaml _original_basename=.gn5k2vua follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:24:23 compute-0 sudo[72043]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:24 compute-0 sudo[72195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnftrgkymulpoqtnlfqnxdbykhvatfgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552664.0569882-335-96989328424665/AnsiballZ_stat.py'
Jan 27 22:24:24 compute-0 sudo[72195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:24 compute-0 python3.9[72197]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:24:24 compute-0 sudo[72195]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:24 compute-0 sudo[72318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjuignfryhzginqnkqavubcnlbblctfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552664.0569882-335-96989328424665/AnsiballZ_copy.py'
Jan 27 22:24:24 compute-0 sudo[72318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:25 compute-0 python3.9[72320]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552664.0569882-335-96989328424665/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:24:25 compute-0 sudo[72318]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:25 compute-0 sudo[72470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opkogrknhlkjjrnoeziybkvzhvelbomd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552665.1856067-350-100429301984233/AnsiballZ_command.py'
Jan 27 22:24:25 compute-0 sudo[72470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:25 compute-0 python3.9[72472]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:24:25 compute-0 sudo[72470]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:26 compute-0 sudo[72623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfwyfxjkfoxirjockruilhabnqyweemg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552666.0837193-358-57984946918744/AnsiballZ_command.py'
Jan 27 22:24:26 compute-0 sudo[72623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:26 compute-0 python3.9[72625]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:24:26 compute-0 sudo[72623]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:27 compute-0 sudo[72776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoxarhojvhsxcafdposuqmxcmrkknopy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769552666.8549862-366-45615308998523/AnsiballZ_edpm_nftables_from_files.py'
Jan 27 22:24:27 compute-0 sudo[72776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:27 compute-0 python3[72778]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 27 22:24:27 compute-0 sudo[72776]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:28 compute-0 sudo[72928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmxnattvygthifnzonribyrtbcsphdbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552667.7324426-374-116511416563582/AnsiballZ_stat.py'
Jan 27 22:24:28 compute-0 sudo[72928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:28 compute-0 python3.9[72930]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:24:28 compute-0 sudo[72928]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:28 compute-0 sudo[73051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyomphzcgnzigwqilxyspooztwdkjutw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552667.7324426-374-116511416563582/AnsiballZ_copy.py'
Jan 27 22:24:28 compute-0 sudo[73051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:28 compute-0 python3.9[73053]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552667.7324426-374-116511416563582/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:24:28 compute-0 sudo[73051]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:29 compute-0 sudo[73203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaihqfjnvijlpzincywfzzqhvrvrlzlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552669.1703951-389-255732937643225/AnsiballZ_stat.py'
Jan 27 22:24:29 compute-0 sudo[73203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:29 compute-0 python3.9[73205]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:24:29 compute-0 sudo[73203]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:29 compute-0 sudo[73326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwsvzcbowgpbrvcrzghlarmygaijfcga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552669.1703951-389-255732937643225/AnsiballZ_copy.py'
Jan 27 22:24:29 compute-0 sudo[73326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:30 compute-0 python3.9[73328]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552669.1703951-389-255732937643225/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:24:30 compute-0 sudo[73326]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:30 compute-0 sudo[73478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsrwcrazawvjkpltmyvfkxoswxqdubvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552670.3355803-404-67076493052663/AnsiballZ_stat.py'
Jan 27 22:24:30 compute-0 sudo[73478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:30 compute-0 python3.9[73480]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:24:30 compute-0 sudo[73478]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:31 compute-0 sudo[73601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbzycusewfmzqtcaxnhokqsnvxkhbulm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552670.3355803-404-67076493052663/AnsiballZ_copy.py'
Jan 27 22:24:31 compute-0 sudo[73601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:31 compute-0 python3.9[73603]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552670.3355803-404-67076493052663/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:24:31 compute-0 sudo[73601]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:31 compute-0 sudo[73753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onqewdxcccevtrtcphxjotslodmygexi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552671.6590543-419-152718561009730/AnsiballZ_stat.py'
Jan 27 22:24:31 compute-0 sudo[73753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:32 compute-0 python3.9[73755]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:24:32 compute-0 sudo[73753]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:32 compute-0 sudo[73876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rofvdfwgijwpilayvvbxlskmuytgmbuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552671.6590543-419-152718561009730/AnsiballZ_copy.py'
Jan 27 22:24:32 compute-0 sudo[73876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:32 compute-0 python3.9[73878]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552671.6590543-419-152718561009730/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:24:32 compute-0 sudo[73876]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:33 compute-0 sudo[74028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okhryztrvfcromlktvbgmjzhjbsabgpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552672.814568-434-188651357108733/AnsiballZ_stat.py'
Jan 27 22:24:33 compute-0 sudo[74028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:33 compute-0 python3.9[74030]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:24:33 compute-0 sudo[74028]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:33 compute-0 sudo[74151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwwfnjcfkhurnizqgfwluusdlgqycmiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552672.814568-434-188651357108733/AnsiballZ_copy.py'
Jan 27 22:24:33 compute-0 sudo[74151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:33 compute-0 python3.9[74153]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552672.814568-434-188651357108733/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:24:33 compute-0 sudo[74151]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:34 compute-0 sudo[74303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzhojduuldvndveehckwotimuefndopf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552674.0464456-449-129359696065626/AnsiballZ_file.py'
Jan 27 22:24:34 compute-0 sudo[74303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:34 compute-0 python3.9[74305]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:24:34 compute-0 sudo[74303]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:34 compute-0 sudo[74455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydyhqefeufdjzlefaketsdfujbmuaxgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552674.6180933-457-193157580578377/AnsiballZ_command.py'
Jan 27 22:24:34 compute-0 sudo[74455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:35 compute-0 python3.9[74457]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:24:35 compute-0 sudo[74455]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:35 compute-0 sudo[74614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acqcoevayfigxuqfoatibcarwcalbgvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552675.3585315-465-262703924524398/AnsiballZ_blockinfile.py'
Jan 27 22:24:35 compute-0 sudo[74614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:35 compute-0 python3.9[74616]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:24:36 compute-0 sudo[74614]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:36 compute-0 sudo[74767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugisxafhmdigkrggimwcqhqmbedvkzkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552676.2036312-474-159787830764482/AnsiballZ_file.py'
Jan 27 22:24:36 compute-0 sudo[74767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:36 compute-0 python3.9[74769]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:24:36 compute-0 sudo[74767]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:36 compute-0 sudo[74919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxuavunjtdliviiulqewqjemruuhyrux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552676.7615411-474-215783714837424/AnsiballZ_file.py'
Jan 27 22:24:37 compute-0 sudo[74919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:37 compute-0 python3.9[74921]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:24:37 compute-0 sudo[74919]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:37 compute-0 sudo[75071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svoseerwurauifuvtcljfkmpezmmalha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552677.3582115-489-240532976702346/AnsiballZ_mount.py'
Jan 27 22:24:37 compute-0 sudo[75071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:37 compute-0 python3.9[75073]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 27 22:24:38 compute-0 sudo[75071]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:38 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 22:24:38 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 22:24:38 compute-0 sudo[75225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wopxovnzaggbhcyhczlccstweytmaolu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552678.1238234-489-45123663673775/AnsiballZ_mount.py'
Jan 27 22:24:38 compute-0 sudo[75225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:38 compute-0 python3.9[75227]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 27 22:24:38 compute-0 sudo[75225]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:39 compute-0 sshd-session[66067]: Connection closed by 192.168.122.30 port 44170
Jan 27 22:24:39 compute-0 sshd-session[66064]: pam_unix(sshd:session): session closed for user zuul
Jan 27 22:24:39 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Jan 27 22:24:39 compute-0 systemd[1]: session-14.scope: Consumed 34.511s CPU time.
Jan 27 22:24:39 compute-0 systemd-logind[789]: Session 14 logged out. Waiting for processes to exit.
Jan 27 22:24:39 compute-0 systemd-logind[789]: Removed session 14.
Jan 27 22:24:45 compute-0 sshd-session[75253]: Accepted publickey for zuul from 192.168.122.30 port 47782 ssh2: ECDSA SHA256:f2siSFgqhRl+V43NMPJ82N3mZUylXFtu0KAbYfQTK7A
Jan 27 22:24:45 compute-0 systemd-logind[789]: New session 15 of user zuul.
Jan 27 22:24:45 compute-0 systemd[1]: Started Session 15 of User zuul.
Jan 27 22:24:45 compute-0 sshd-session[75253]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 22:24:46 compute-0 sudo[75406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcxbrdrcfvtjqqzddgamgpzrqondnwqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552685.7004104-16-79445977804081/AnsiballZ_tempfile.py'
Jan 27 22:24:46 compute-0 sudo[75406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:46 compute-0 python3.9[75408]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 27 22:24:46 compute-0 sudo[75406]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:47 compute-0 sudo[75558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpsooorqjocrkdttrivhfehxgwouyxlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552686.6002488-28-157192111812153/AnsiballZ_stat.py'
Jan 27 22:24:47 compute-0 sudo[75558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:47 compute-0 python3.9[75560]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:24:47 compute-0 sudo[75558]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:48 compute-0 sudo[75710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvziywdrekruuwqwhnajjqwbgsoampfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552687.4725003-38-274019083543803/AnsiballZ_setup.py'
Jan 27 22:24:48 compute-0 sudo[75710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:48 compute-0 python3.9[75712]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:24:48 compute-0 sudo[75710]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:49 compute-0 sudo[75862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fofsfmbcpwntrbkyemjijbafnlesapgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552688.6391196-47-198765122348051/AnsiballZ_blockinfile.py'
Jan 27 22:24:49 compute-0 sudo[75862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:49 compute-0 python3.9[75864]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4jPkdIRJqMEws5JYMx9X2gDuJ6Y9v+mmTbPFw1r0BQ8iugTMWJMQTUGumikPEYE5Y5iSKb+SRbNUBzEg5axGaArOJNd97VcCYstQC4zolmKN2gQ2TAqYCQxvTtSRWfiAQGvEtzch2f5fSJHiTn51jtKm5o3ra9vlFK1JRR0LkApW6vrCtQFvd4b54Ue7uHqQQUgpsqKdhwtYxIHOD2fpaVboKJ4OsITWn7vHKzhu9kQY8QTbcO7g1/dm6Ku7sl1tzWpAqMXd8BgxwIdXoo7/cheNB8gC5PGNRONATJdJK9uzneR0Pwxyfln/6dPa0rlNFBSNJ3UXEyWXcePMPOLv/LLoPOk6dpRWBSsSNWYZg9rTrtT7307Qek1iNmYQA6yvj4vN9VC7dnR/vGrUJS6LRBxY34wWLbsxJzN6w/ILhkwu56n22/yWKi8YEVlsgd6RCUUWp17pCNpCxNRjl9X0LR7W8DqpMsLrwiqVocnAPWn61fzz/jYPWkJ6xjP64RYU=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIODyzNmV+V2um5yBKq2V+Q6+Ke7IQtkK7CQ6XSR8pCMC
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIlO1lci6F+2qJpYGC679BQlKv6WaUzrKVSIDZtG0VvWUW/tSqrpa7/asLay+TKK2TFpKRb8fD5vaNePdZArE00=
                                             create=True mode=0644 path=/tmp/ansible.24jk6zcl state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:24:49 compute-0 sudo[75862]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:50 compute-0 sudo[76014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjthyasepgqkatjfmzyqhwhxjjhdwyhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552689.620353-55-220292829169474/AnsiballZ_command.py'
Jan 27 22:24:50 compute-0 sudo[76014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:50 compute-0 python3.9[76016]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.24jk6zcl' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:24:50 compute-0 sudo[76014]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:50 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 27 22:24:50 compute-0 sudo[76170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuwhjfrlqqbhicsiurubbplajszzsflp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552690.5105355-63-70883805451296/AnsiballZ_file.py'
Jan 27 22:24:50 compute-0 sudo[76170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:24:51 compute-0 python3.9[76172]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.24jk6zcl state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:24:51 compute-0 sudo[76170]: pam_unix(sudo:session): session closed for user root
Jan 27 22:24:51 compute-0 sshd-session[75256]: Connection closed by 192.168.122.30 port 47782
Jan 27 22:24:51 compute-0 sshd-session[75253]: pam_unix(sshd:session): session closed for user zuul
Jan 27 22:24:51 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Jan 27 22:24:51 compute-0 systemd[1]: session-15.scope: Consumed 3.857s CPU time.
Jan 27 22:24:51 compute-0 systemd-logind[789]: Session 15 logged out. Waiting for processes to exit.
Jan 27 22:24:51 compute-0 systemd-logind[789]: Removed session 15.
Jan 27 22:24:57 compute-0 sshd-session[76197]: Accepted publickey for zuul from 192.168.122.30 port 58960 ssh2: ECDSA SHA256:f2siSFgqhRl+V43NMPJ82N3mZUylXFtu0KAbYfQTK7A
Jan 27 22:24:57 compute-0 systemd-logind[789]: New session 16 of user zuul.
Jan 27 22:24:58 compute-0 systemd[1]: Started Session 16 of User zuul.
Jan 27 22:24:58 compute-0 sshd-session[76197]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 22:24:59 compute-0 python3.9[76350]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:24:59 compute-0 sudo[76504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awijzrvkmhhnuxvecqxsztfpillilrov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552699.364458-27-42717590564899/AnsiballZ_systemd.py'
Jan 27 22:24:59 compute-0 sudo[76504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:00 compute-0 python3.9[76506]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 27 22:25:00 compute-0 sudo[76504]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:00 compute-0 sudo[76658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qntnmihdvzvmrtfgutefrugeqxhszofx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552700.6043854-35-188776799019844/AnsiballZ_systemd.py'
Jan 27 22:25:00 compute-0 sudo[76658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:01 compute-0 python3.9[76660]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 22:25:01 compute-0 sudo[76658]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:01 compute-0 sudo[76811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdsjgbmogvungfeqvdpbjivlzoygynqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552701.4041278-44-275654866220103/AnsiballZ_command.py'
Jan 27 22:25:01 compute-0 sudo[76811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:02 compute-0 python3.9[76813]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:25:02 compute-0 sudo[76811]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:02 compute-0 sudo[76964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydpqxlfwwxlhfmaqhjxircugcwywgyps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552702.2187922-52-239457101606214/AnsiballZ_stat.py'
Jan 27 22:25:02 compute-0 sudo[76964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:02 compute-0 python3.9[76966]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:25:02 compute-0 sudo[76964]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:03 compute-0 sudo[77118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umdiiqmvcnjizuvawytwahaxoefolqyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552703.0107133-60-88233703619663/AnsiballZ_command.py'
Jan 27 22:25:03 compute-0 sudo[77118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:03 compute-0 python3.9[77120]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:25:03 compute-0 sudo[77118]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:04 compute-0 sudo[77273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bszpvinctmwhficrqzlwzjpkxyglvsto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552703.7542994-68-279828693273700/AnsiballZ_file.py'
Jan 27 22:25:04 compute-0 sudo[77273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:04 compute-0 python3.9[77275]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:25:04 compute-0 sudo[77273]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:04 compute-0 sshd-session[76200]: Connection closed by 192.168.122.30 port 58960
Jan 27 22:25:04 compute-0 sshd-session[76197]: pam_unix(sshd:session): session closed for user zuul
Jan 27 22:25:04 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Jan 27 22:25:04 compute-0 systemd[1]: session-16.scope: Consumed 4.550s CPU time.
Jan 27 22:25:04 compute-0 systemd-logind[789]: Session 16 logged out. Waiting for processes to exit.
Jan 27 22:25:04 compute-0 systemd-logind[789]: Removed session 16.
Jan 27 22:25:10 compute-0 sshd-session[77300]: Accepted publickey for zuul from 192.168.122.30 port 35916 ssh2: ECDSA SHA256:f2siSFgqhRl+V43NMPJ82N3mZUylXFtu0KAbYfQTK7A
Jan 27 22:25:10 compute-0 systemd-logind[789]: New session 17 of user zuul.
Jan 27 22:25:10 compute-0 systemd[1]: Started Session 17 of User zuul.
Jan 27 22:25:10 compute-0 sshd-session[77300]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 22:25:10 compute-0 sshd-session[77302]: Received disconnect from 91.224.92.108 port 29262:11:  [preauth]
Jan 27 22:25:10 compute-0 sshd-session[77302]: Disconnected from authenticating user root 91.224.92.108 port 29262 [preauth]
Jan 27 22:25:11 compute-0 python3.9[77455]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:25:12 compute-0 sudo[77609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taanlscdzrrktygsenjgfkahpdhicgvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552711.7742057-29-158656200059342/AnsiballZ_setup.py'
Jan 27 22:25:12 compute-0 sudo[77609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:12 compute-0 python3.9[77611]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 22:25:12 compute-0 sudo[77609]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:13 compute-0 sudo[77693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnmhyzhujlvskroavfwiuiygdfrperwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552711.7742057-29-158656200059342/AnsiballZ_dnf.py'
Jan 27 22:25:13 compute-0 sudo[77693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:13 compute-0 python3.9[77695]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 27 22:25:14 compute-0 sudo[77693]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:15 compute-0 python3.9[77846]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:25:16 compute-0 python3.9[77997]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 22:25:17 compute-0 python3.9[78147]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:25:18 compute-0 python3.9[78297]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:25:18 compute-0 sshd-session[77305]: Connection closed by 192.168.122.30 port 35916
Jan 27 22:25:18 compute-0 sshd-session[77300]: pam_unix(sshd:session): session closed for user zuul
Jan 27 22:25:18 compute-0 systemd-logind[789]: Session 17 logged out. Waiting for processes to exit.
Jan 27 22:25:18 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Jan 27 22:25:18 compute-0 systemd[1]: session-17.scope: Consumed 5.989s CPU time.
Jan 27 22:25:18 compute-0 systemd-logind[789]: Removed session 17.
Jan 27 22:25:24 compute-0 sshd-session[78322]: Accepted publickey for zuul from 192.168.122.30 port 36812 ssh2: ECDSA SHA256:f2siSFgqhRl+V43NMPJ82N3mZUylXFtu0KAbYfQTK7A
Jan 27 22:25:24 compute-0 systemd-logind[789]: New session 18 of user zuul.
Jan 27 22:25:24 compute-0 systemd[1]: Started Session 18 of User zuul.
Jan 27 22:25:24 compute-0 sshd-session[78322]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 22:25:25 compute-0 python3.9[78475]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:25:26 compute-0 sudo[78629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmpyoslxjxahdpiaqcvprpueslhhasbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552726.5172668-45-86764118049581/AnsiballZ_file.py'
Jan 27 22:25:26 compute-0 sudo[78629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:27 compute-0 python3.9[78631]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:25:27 compute-0 sudo[78629]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:27 compute-0 sudo[78781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nikxczoxicskjytmbqnofwpqfxyqsttj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552727.2377913-45-112296849917162/AnsiballZ_file.py'
Jan 27 22:25:27 compute-0 sudo[78781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:27 compute-0 python3.9[78783]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:25:27 compute-0 sudo[78781]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:28 compute-0 sudo[78933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlbuamlbapefhyhkfxybpejifckeowje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552727.8516786-60-115372385367203/AnsiballZ_stat.py'
Jan 27 22:25:28 compute-0 sudo[78933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:28 compute-0 python3.9[78935]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:25:28 compute-0 sudo[78933]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:28 compute-0 sudo[79056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csmybeoyofbeefccdvtjusxjjhmmhejo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552727.8516786-60-115372385367203/AnsiballZ_copy.py'
Jan 27 22:25:28 compute-0 sudo[79056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:29 compute-0 python3.9[79058]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552727.8516786-60-115372385367203/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=59978dd7ca3db714e5f1d571c55651b74146acdc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:25:29 compute-0 sudo[79056]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:29 compute-0 sudo[79208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urghbfhiaklfjamarrbhdlblofabvgnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552729.2630377-60-249273670157917/AnsiballZ_stat.py'
Jan 27 22:25:29 compute-0 sudo[79208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:29 compute-0 python3.9[79210]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:25:29 compute-0 sudo[79208]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:30 compute-0 sudo[79331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bocjsuyqilrkmoascbvrclicyomawhfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552729.2630377-60-249273670157917/AnsiballZ_copy.py'
Jan 27 22:25:30 compute-0 sudo[79331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:30 compute-0 python3.9[79333]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552729.2630377-60-249273670157917/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=a749d8a82b0055fe54420a1c96b6f4d20ebc23d3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:25:30 compute-0 sudo[79331]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:30 compute-0 sudo[79483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-poxgaageqaxerxkcyzebxguljbchotvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552730.502724-60-198356952056076/AnsiballZ_stat.py'
Jan 27 22:25:30 compute-0 sudo[79483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:30 compute-0 python3.9[79485]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:25:30 compute-0 sudo[79483]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:31 compute-0 sudo[79606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqsuaeqdbucauxwejvoajksowmuggbjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552730.502724-60-198356952056076/AnsiballZ_copy.py'
Jan 27 22:25:31 compute-0 sudo[79606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:31 compute-0 python3.9[79608]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552730.502724-60-198356952056076/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=dcb94f60f3c2c1b3c01ec3bb28618d7412b2d659 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:25:31 compute-0 sudo[79606]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:31 compute-0 sudo[79758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akxqdfgpydgfsrytnwnapudtrdonitwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552731.6272712-104-64462567277708/AnsiballZ_file.py'
Jan 27 22:25:31 compute-0 sudo[79758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:32 compute-0 python3.9[79760]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:25:32 compute-0 sudo[79758]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:32 compute-0 sudo[79910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmneaierivrcdarscgrcwctcemsdbbns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552732.195137-104-190889904938615/AnsiballZ_file.py'
Jan 27 22:25:32 compute-0 sudo[79910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:32 compute-0 python3.9[79912]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:25:32 compute-0 sudo[79910]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:33 compute-0 sudo[80062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uectmebfxttgcxjyfxcxsnuqeguxhxos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552732.9514768-119-118911145606086/AnsiballZ_stat.py'
Jan 27 22:25:33 compute-0 sudo[80062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:33 compute-0 python3.9[80064]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:25:33 compute-0 sudo[80062]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:33 compute-0 sudo[80185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrzkqwmewprwpljowkhvvqngnasqtpiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552732.9514768-119-118911145606086/AnsiballZ_copy.py'
Jan 27 22:25:33 compute-0 sudo[80185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:33 compute-0 python3.9[80187]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552732.9514768-119-118911145606086/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=d2c80533007d01d2df2496da38d5bf05713afef8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:25:33 compute-0 sudo[80185]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:34 compute-0 sudo[80337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdebyuphzkvthwdlbewpulcvwdtvkhbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552733.9961603-119-195767599315326/AnsiballZ_stat.py'
Jan 27 22:25:34 compute-0 sudo[80337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:34 compute-0 python3.9[80339]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:25:34 compute-0 sudo[80337]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:34 compute-0 sudo[80460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjhceqodefapluwheurtgufbrytkhaie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552733.9961603-119-195767599315326/AnsiballZ_copy.py'
Jan 27 22:25:34 compute-0 sudo[80460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:34 compute-0 python3.9[80462]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552733.9961603-119-195767599315326/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=a749d8a82b0055fe54420a1c96b6f4d20ebc23d3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:25:34 compute-0 sudo[80460]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:35 compute-0 sudo[80612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fixhqsdssbnjlvrkvipalzhczpxwbejo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552734.993941-119-273586818409249/AnsiballZ_stat.py'
Jan 27 22:25:35 compute-0 sudo[80612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:35 compute-0 python3.9[80614]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:25:35 compute-0 sudo[80612]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:35 compute-0 sudo[80735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grbhwbdqdpwpqncthhccwyjotwxscsgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552734.993941-119-273586818409249/AnsiballZ_copy.py'
Jan 27 22:25:35 compute-0 sudo[80735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:35 compute-0 python3.9[80737]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552734.993941-119-273586818409249/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=e5de025e55641ddb8259bc5d833b0c69cff62e59 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:25:36 compute-0 sudo[80735]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:36 compute-0 sudo[80887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eagrqgksnuykghatdpokxershjnywwsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552736.201551-163-132099022006278/AnsiballZ_file.py'
Jan 27 22:25:36 compute-0 sudo[80887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:36 compute-0 python3.9[80889]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:25:36 compute-0 sudo[80887]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:37 compute-0 sudo[81039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibrivpfvytxyonvlgxxoukagmarpbcls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552736.9634733-163-224708806166845/AnsiballZ_file.py'
Jan 27 22:25:37 compute-0 sudo[81039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:37 compute-0 python3.9[81041]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:25:37 compute-0 sudo[81039]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:37 compute-0 sudo[81191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgrpqiyqwhrouvsakjvstwsbqotqgehs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552737.6589487-178-33248976885792/AnsiballZ_stat.py'
Jan 27 22:25:37 compute-0 sudo[81191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:38 compute-0 python3.9[81193]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:25:38 compute-0 sudo[81191]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:38 compute-0 sudo[81314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zouodwopvgrwxnbhgtvotssiezukptes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552737.6589487-178-33248976885792/AnsiballZ_copy.py'
Jan 27 22:25:38 compute-0 sudo[81314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:38 compute-0 python3.9[81316]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552737.6589487-178-33248976885792/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=f1ccefb93c5d3c332f2dd2cfb6fe0a9e4919400d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:25:38 compute-0 sudo[81314]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:39 compute-0 sudo[81466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtcyqjmxxpkjkarimnosbgqacgzdtuts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552738.8679733-178-189918016823051/AnsiballZ_stat.py'
Jan 27 22:25:39 compute-0 sudo[81466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:39 compute-0 python3.9[81468]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:25:39 compute-0 sudo[81466]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:39 compute-0 sudo[81589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zctkawxgsdbqxoaugbotjovzwfhgqcye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552738.8679733-178-189918016823051/AnsiballZ_copy.py'
Jan 27 22:25:39 compute-0 sudo[81589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:39 compute-0 python3.9[81591]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552738.8679733-178-189918016823051/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=3815b537227f8e7d2ddf12cbc1b9b8f600a2a073 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:25:39 compute-0 sudo[81589]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:40 compute-0 sudo[81741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xajmotibyzdxqgyadocxvciblybosqbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552740.1065526-178-204607000003858/AnsiballZ_stat.py'
Jan 27 22:25:40 compute-0 sudo[81741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:40 compute-0 python3.9[81743]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:25:40 compute-0 sudo[81741]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:40 compute-0 sudo[81864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzgrkbnfzxsyfewjuyvmlzfmffetlcdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552740.1065526-178-204607000003858/AnsiballZ_copy.py'
Jan 27 22:25:40 compute-0 sudo[81864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:41 compute-0 python3.9[81866]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552740.1065526-178-204607000003858/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=82fc3bb63b492273ba11394b9f58ca8284ee5101 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:25:41 compute-0 sudo[81864]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:41 compute-0 sudo[82016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjniwklkiydceyietdtfgxmqfbwfgvan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552741.3735826-222-252844338127058/AnsiballZ_file.py'
Jan 27 22:25:41 compute-0 sudo[82016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:41 compute-0 python3.9[82018]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:25:41 compute-0 sudo[82016]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:42 compute-0 sudo[82168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjcuwtbpxypibyjnuincwhnbllnvxigc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552742.0011742-222-231682136426386/AnsiballZ_file.py'
Jan 27 22:25:42 compute-0 sudo[82168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:42 compute-0 python3.9[82170]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:25:42 compute-0 sudo[82168]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:42 compute-0 sudo[82320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfkaijihrnvrmptasdytddqestjxngdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552742.596197-237-240031983206413/AnsiballZ_stat.py'
Jan 27 22:25:42 compute-0 sudo[82320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:43 compute-0 python3.9[82322]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:25:43 compute-0 sudo[82320]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:43 compute-0 sudo[82443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfpibbivtgwbjaqxweazujjorsbbmsrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552742.596197-237-240031983206413/AnsiballZ_copy.py'
Jan 27 22:25:43 compute-0 sudo[82443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:43 compute-0 python3.9[82445]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552742.596197-237-240031983206413/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=dc5238c8b508e36976848ff2887af4d94e4fd75b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:25:43 compute-0 sudo[82443]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:43 compute-0 sudo[82595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkbtwdtlwirzlundeiydgogjwhjukjle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552743.6266503-237-243649297425926/AnsiballZ_stat.py'
Jan 27 22:25:43 compute-0 sudo[82595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:44 compute-0 python3.9[82597]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:25:44 compute-0 sudo[82595]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:44 compute-0 sudo[82718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ormumzwnttgifijzlkjrqzgdxbbwzxwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552743.6266503-237-243649297425926/AnsiballZ_copy.py'
Jan 27 22:25:44 compute-0 sudo[82718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:44 compute-0 python3.9[82720]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552743.6266503-237-243649297425926/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=eeb3fcf9853ed8323df55c1a099784026d442f34 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:25:44 compute-0 sudo[82718]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:45 compute-0 sudo[82870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czurncjjjiwdwismgkwfacyzoahmseie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552744.773991-237-268083199186916/AnsiballZ_stat.py'
Jan 27 22:25:45 compute-0 sudo[82870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:45 compute-0 python3.9[82872]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:25:45 compute-0 sudo[82870]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:45 compute-0 sudo[82993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlxubpxpynylumkoqniajrgfmdkzkfuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552744.773991-237-268083199186916/AnsiballZ_copy.py'
Jan 27 22:25:45 compute-0 sudo[82993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:45 compute-0 python3.9[82995]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552744.773991-237-268083199186916/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=d4d268c84024b7a697ecf8cd83bcedad2a491b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:25:45 compute-0 sudo[82993]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:46 compute-0 sudo[83145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdhpbwfpwhcytxsydckjyirjnweaeusg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552745.8740644-281-132248222509621/AnsiballZ_file.py'
Jan 27 22:25:46 compute-0 sudo[83145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:46 compute-0 python3.9[83147]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:25:46 compute-0 sudo[83145]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:46 compute-0 sudo[83297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjwzdntrpnmaxdqhccqevvngaembcccq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552746.3970644-281-241516198243131/AnsiballZ_file.py'
Jan 27 22:25:46 compute-0 sudo[83297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:46 compute-0 python3.9[83299]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:25:46 compute-0 sudo[83297]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:47 compute-0 sudo[83449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbwhanbqymqnzeisluequpcekypvqqfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552747.105642-296-243530466902092/AnsiballZ_stat.py'
Jan 27 22:25:47 compute-0 sudo[83449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:47 compute-0 python3.9[83451]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:25:47 compute-0 sudo[83449]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:47 compute-0 sudo[83572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajgmemdfqirhegxdiwfnbcifslecvzxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552747.105642-296-243530466902092/AnsiballZ_copy.py'
Jan 27 22:25:47 compute-0 sudo[83572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:48 compute-0 python3.9[83574]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552747.105642-296-243530466902092/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=042b1a7e496a6f045b4b515c241796a928eaa7e0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:25:48 compute-0 sudo[83572]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:48 compute-0 sudo[83724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvvkkgilobnrhwqiwvzvxlmpoevcazsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552748.1760392-296-75905484140547/AnsiballZ_stat.py'
Jan 27 22:25:48 compute-0 sudo[83724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:48 compute-0 python3.9[83726]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:25:48 compute-0 sudo[83724]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:48 compute-0 sudo[83847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oefwjrykmvfqijlvamqnxsbrecjkccmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552748.1760392-296-75905484140547/AnsiballZ_copy.py'
Jan 27 22:25:48 compute-0 sudo[83847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:49 compute-0 python3.9[83849]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552748.1760392-296-75905484140547/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=3815b537227f8e7d2ddf12cbc1b9b8f600a2a073 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:25:49 compute-0 sudo[83847]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:49 compute-0 sudo[83999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njbwebkmaroewkujhwxeznxtcdzgrneu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552749.233924-296-154924600320501/AnsiballZ_stat.py'
Jan 27 22:25:49 compute-0 sudo[83999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:49 compute-0 python3.9[84001]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:25:49 compute-0 sudo[83999]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:49 compute-0 sudo[84122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfxuuvjmlzaogffuvnzcyfkmmaebvxbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552749.233924-296-154924600320501/AnsiballZ_copy.py'
Jan 27 22:25:49 compute-0 sudo[84122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:50 compute-0 python3.9[84124]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552749.233924-296-154924600320501/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=e62118dcb692861b6d4a2221644724ffc1cea231 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:25:50 compute-0 sudo[84122]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:51 compute-0 sudo[84274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkyyawaaljcywpycidzvdiqfqffurmuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552750.849671-356-222474471624770/AnsiballZ_file.py'
Jan 27 22:25:51 compute-0 sudo[84274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:51 compute-0 python3.9[84276]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:25:51 compute-0 sudo[84274]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:51 compute-0 sudo[84426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdeyljbpylhdtaezgrfsjyglknkvrssl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552751.4213936-364-229959576104274/AnsiballZ_stat.py'
Jan 27 22:25:51 compute-0 sudo[84426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:51 compute-0 python3.9[84428]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:25:51 compute-0 sudo[84426]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:52 compute-0 sudo[84549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arvuzbhphxxapfkodwzeliadneriuaxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552751.4213936-364-229959576104274/AnsiballZ_copy.py'
Jan 27 22:25:52 compute-0 sudo[84549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:52 compute-0 chronyd[66038]: Selected source 216.232.132.102 (pool.ntp.org)
Jan 27 22:25:52 compute-0 python3.9[84551]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552751.4213936-364-229959576104274/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=599e206bd571b4f5a31985a590e147a0494141e3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:25:52 compute-0 sudo[84549]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:52 compute-0 sudo[84701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zplluogeleqhmtfotmgcwfmnunolmmsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552752.5881798-380-132950236306194/AnsiballZ_file.py'
Jan 27 22:25:52 compute-0 sudo[84701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:53 compute-0 python3.9[84703]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:25:53 compute-0 sudo[84701]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:53 compute-0 sudo[84853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkwuahqzhanxthhacfatqcwsagujlnyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552753.241951-388-183194471828958/AnsiballZ_stat.py'
Jan 27 22:25:53 compute-0 sudo[84853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:53 compute-0 python3.9[84855]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:25:53 compute-0 sudo[84853]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:54 compute-0 sudo[84976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnjzvqcjdmshlutelselbjxhferuyndp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552753.241951-388-183194471828958/AnsiballZ_copy.py'
Jan 27 22:25:54 compute-0 sudo[84976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:54 compute-0 python3.9[84978]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552753.241951-388-183194471828958/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=599e206bd571b4f5a31985a590e147a0494141e3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:25:54 compute-0 sudo[84976]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:54 compute-0 sudo[85128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdkkrgncgaplkueshvdncbsskfxhgxxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552754.4570172-404-24351359472891/AnsiballZ_file.py'
Jan 27 22:25:54 compute-0 sudo[85128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:54 compute-0 python3.9[85130]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:25:54 compute-0 sudo[85128]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:55 compute-0 sudo[85280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lobrnrangfyzhuiaivdozuctcdsphwtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552755.111416-412-240463691719971/AnsiballZ_stat.py'
Jan 27 22:25:55 compute-0 sudo[85280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:55 compute-0 python3.9[85282]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:25:55 compute-0 sudo[85280]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:55 compute-0 sudo[85403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-przpcmrwxuyqdppcbgjslpymjkskugkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552755.111416-412-240463691719971/AnsiballZ_copy.py'
Jan 27 22:25:55 compute-0 sudo[85403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:56 compute-0 python3.9[85405]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552755.111416-412-240463691719971/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=599e206bd571b4f5a31985a590e147a0494141e3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:25:56 compute-0 sudo[85403]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:56 compute-0 sudo[85555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svazbyqjznexclflntwgltxghusdcgvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552756.2025418-428-28839314730250/AnsiballZ_file.py'
Jan 27 22:25:56 compute-0 sudo[85555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:56 compute-0 python3.9[85557]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:25:56 compute-0 sudo[85555]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:57 compute-0 sudo[85707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyuiiuwmqkyidzimobykxzzfeqqxcwgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552756.793512-436-215195328553183/AnsiballZ_stat.py'
Jan 27 22:25:57 compute-0 sudo[85707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:57 compute-0 python3.9[85709]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:25:57 compute-0 sudo[85707]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:57 compute-0 sudo[85830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgywyqpmtpjgfxrjeisvtseabcxtduay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552756.793512-436-215195328553183/AnsiballZ_copy.py'
Jan 27 22:25:57 compute-0 sudo[85830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:57 compute-0 python3.9[85832]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552756.793512-436-215195328553183/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=599e206bd571b4f5a31985a590e147a0494141e3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:25:57 compute-0 sudo[85830]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:58 compute-0 sudo[85982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqkwoadmcgrtlrtfohjvulgshqtrkmaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552757.9217129-452-88373396930114/AnsiballZ_file.py'
Jan 27 22:25:58 compute-0 sudo[85982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:58 compute-0 python3.9[85984]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:25:58 compute-0 sudo[85982]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:58 compute-0 sudo[86134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhpotfswxutvmybuoplnucwkqklcjbip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552758.5355356-460-191280578520839/AnsiballZ_stat.py'
Jan 27 22:25:58 compute-0 sudo[86134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:58 compute-0 python3.9[86136]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:25:58 compute-0 sudo[86134]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:59 compute-0 sudo[86257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tokzgomalvtnebvdzvxwkfkwsqhpwuob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552758.5355356-460-191280578520839/AnsiballZ_copy.py'
Jan 27 22:25:59 compute-0 sudo[86257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:25:59 compute-0 python3.9[86259]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552758.5355356-460-191280578520839/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=599e206bd571b4f5a31985a590e147a0494141e3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:25:59 compute-0 sudo[86257]: pam_unix(sudo:session): session closed for user root
Jan 27 22:25:59 compute-0 sudo[86409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzacukxdsplqpuftuofnxgnycbbzbsyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552759.5958767-476-95662576156161/AnsiballZ_file.py'
Jan 27 22:25:59 compute-0 sudo[86409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:00 compute-0 python3.9[86411]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:26:00 compute-0 sudo[86409]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:00 compute-0 sudo[86561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zumvttqaocmazfourescfjqhdnmyoitj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552760.1777053-484-123947522995579/AnsiballZ_stat.py'
Jan 27 22:26:00 compute-0 sudo[86561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:00 compute-0 python3.9[86563]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:26:00 compute-0 sudo[86561]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:00 compute-0 sudo[86684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tecdohoizncxpkzcmcgciabufevxiyqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552760.1777053-484-123947522995579/AnsiballZ_copy.py'
Jan 27 22:26:00 compute-0 sudo[86684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:01 compute-0 python3.9[86686]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552760.1777053-484-123947522995579/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=599e206bd571b4f5a31985a590e147a0494141e3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:26:01 compute-0 sudo[86684]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:01 compute-0 sudo[86836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyrwqvqaihpjlwdehyypoouswtcudepk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552761.3606775-500-86830710044044/AnsiballZ_file.py'
Jan 27 22:26:01 compute-0 sudo[86836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:01 compute-0 python3.9[86838]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:26:01 compute-0 sudo[86836]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:02 compute-0 sudo[86988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njlzthfvcnxgwdgwbbzytwbpyjwftyab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552762.0043855-508-155926326685950/AnsiballZ_stat.py'
Jan 27 22:26:02 compute-0 sudo[86988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:02 compute-0 python3.9[86990]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:26:02 compute-0 sudo[86988]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:02 compute-0 sudo[87111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykibmznntwktggubgjftgeqwrzvoqhda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552762.0043855-508-155926326685950/AnsiballZ_copy.py'
Jan 27 22:26:02 compute-0 sudo[87111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:02 compute-0 python3.9[87113]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552762.0043855-508-155926326685950/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=599e206bd571b4f5a31985a590e147a0494141e3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:26:02 compute-0 sudo[87111]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:03 compute-0 sudo[87263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynrkquevypqrztdacbrlhstlrvgpdwew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552763.1866791-524-254323912373842/AnsiballZ_file.py'
Jan 27 22:26:03 compute-0 sudo[87263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:03 compute-0 python3.9[87265]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:26:03 compute-0 sudo[87263]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:04 compute-0 sudo[87415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvrbuhfghwlheaytzgeyphaijjyryjmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552763.7951925-532-213075705467583/AnsiballZ_stat.py'
Jan 27 22:26:04 compute-0 sudo[87415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:04 compute-0 python3.9[87417]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:26:04 compute-0 sudo[87415]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:04 compute-0 sudo[87538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eutsasdqjsjwfekexhomzsgynlhhqoly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552763.7951925-532-213075705467583/AnsiballZ_copy.py'
Jan 27 22:26:04 compute-0 sudo[87538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:04 compute-0 python3.9[87540]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552763.7951925-532-213075705467583/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=599e206bd571b4f5a31985a590e147a0494141e3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:26:04 compute-0 sudo[87538]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:05 compute-0 sshd-session[78325]: Connection closed by 192.168.122.30 port 36812
Jan 27 22:26:05 compute-0 sshd-session[78322]: pam_unix(sshd:session): session closed for user zuul
Jan 27 22:26:05 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Jan 27 22:26:05 compute-0 systemd[1]: session-18.scope: Consumed 33.074s CPU time.
Jan 27 22:26:05 compute-0 systemd-logind[789]: Session 18 logged out. Waiting for processes to exit.
Jan 27 22:26:05 compute-0 systemd-logind[789]: Removed session 18.
Jan 27 22:26:11 compute-0 sshd-session[87565]: Accepted publickey for zuul from 192.168.122.30 port 59840 ssh2: ECDSA SHA256:f2siSFgqhRl+V43NMPJ82N3mZUylXFtu0KAbYfQTK7A
Jan 27 22:26:11 compute-0 systemd-logind[789]: New session 19 of user zuul.
Jan 27 22:26:11 compute-0 systemd[1]: Started Session 19 of User zuul.
Jan 27 22:26:11 compute-0 sshd-session[87565]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 22:26:12 compute-0 python3.9[87718]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:26:13 compute-0 sudo[87872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptgvwetvrrbifvkwqtibcrkgqlibamlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552772.714494-29-41203075798524/AnsiballZ_file.py'
Jan 27 22:26:13 compute-0 sudo[87872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:13 compute-0 python3.9[87874]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:26:13 compute-0 sudo[87872]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:13 compute-0 sudo[88024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmxtgkzgptqxijjbdpmpokvuikcpipcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552773.4134705-29-1314376492465/AnsiballZ_file.py'
Jan 27 22:26:13 compute-0 sudo[88024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:13 compute-0 python3.9[88026]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:26:13 compute-0 sudo[88024]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:14 compute-0 python3.9[88176]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:26:15 compute-0 sudo[88326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kadqxykergqozdofbtjfgxvmffsrqbrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552774.7360575-52-250086330322625/AnsiballZ_seboolean.py'
Jan 27 22:26:15 compute-0 sudo[88326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:15 compute-0 python3.9[88328]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 27 22:26:16 compute-0 sudo[88326]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:16 compute-0 sudo[88482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmenhekxelsbhrkkyehyefmguukeyoqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552776.5989447-62-72547980673615/AnsiballZ_setup.py'
Jan 27 22:26:16 compute-0 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 27 22:26:16 compute-0 sudo[88482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:17 compute-0 python3.9[88484]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 22:26:17 compute-0 sudo[88482]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:17 compute-0 sudo[88566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqupjcdhrfpcrlxnazxgspeowuzmxnzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552776.5989447-62-72547980673615/AnsiballZ_dnf.py'
Jan 27 22:26:17 compute-0 sudo[88566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:18 compute-0 python3.9[88568]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 22:26:19 compute-0 sudo[88566]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:20 compute-0 sudo[88719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjthjtjzkqdetdqrhgtjyfmsxfhdkqnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552779.414689-74-3205249485291/AnsiballZ_systemd.py'
Jan 27 22:26:20 compute-0 sudo[88719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:20 compute-0 python3.9[88721]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 22:26:20 compute-0 sudo[88719]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:20 compute-0 sudo[88874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvptgzmkjexzuvidyovemqbxuocbgbso ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769552780.5476546-82-257601833465791/AnsiballZ_edpm_nftables_snippet.py'
Jan 27 22:26:20 compute-0 sudo[88874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:21 compute-0 python3[88876]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                            rule:
                                              proto: udp
                                              dport: 4789
                                          - rule_name: 119 neutron geneve networks
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              state: ["UNTRACKED"]
                                          - rule_name: 120 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: OUTPUT
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                          - rule_name: 121 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: PREROUTING
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                           dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 27 22:26:21 compute-0 sudo[88874]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:21 compute-0 sudo[89026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuoudpwrkkgfofxzjzwkcvjzczyxppep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552781.46633-91-278398597028888/AnsiballZ_file.py'
Jan 27 22:26:21 compute-0 sudo[89026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:22 compute-0 python3.9[89028]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:26:22 compute-0 sudo[89026]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:22 compute-0 sudo[89178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbgdkxemmpfnaksjxgjgcoooswhboeej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552782.209694-99-155822043535867/AnsiballZ_stat.py'
Jan 27 22:26:22 compute-0 sudo[89178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:22 compute-0 python3.9[89180]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:26:22 compute-0 sudo[89178]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:23 compute-0 sudo[89256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhrrorpsokxqkfoconkhfctzcaywtana ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552782.209694-99-155822043535867/AnsiballZ_file.py'
Jan 27 22:26:23 compute-0 sudo[89256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:23 compute-0 python3.9[89258]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:26:23 compute-0 sudo[89256]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:23 compute-0 sudo[89408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhtzregjmznmdgdyjzguqobsqavkgbgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552783.3839657-111-249203223933849/AnsiballZ_stat.py'
Jan 27 22:26:23 compute-0 sudo[89408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:23 compute-0 python3.9[89410]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:26:23 compute-0 sudo[89408]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:24 compute-0 sudo[89486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebpnjqmgrurnbasjekqjalohfcacgnvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552783.3839657-111-249203223933849/AnsiballZ_file.py'
Jan 27 22:26:24 compute-0 sudo[89486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:24 compute-0 python3.9[89488]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.hx2zh02x recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:26:24 compute-0 sudo[89486]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:24 compute-0 sudo[89638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbnxlfwdydarzpnmlvrtvkwairqdykss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552784.487198-123-280608169347635/AnsiballZ_stat.py'
Jan 27 22:26:24 compute-0 sudo[89638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:24 compute-0 python3.9[89640]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:26:25 compute-0 sudo[89638]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:25 compute-0 sudo[89716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opbusgonfyokpyhvxucenxfvyrtsfjox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552784.487198-123-280608169347635/AnsiballZ_file.py'
Jan 27 22:26:25 compute-0 sudo[89716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:25 compute-0 python3.9[89718]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:26:25 compute-0 sudo[89716]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:26 compute-0 sudo[89868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alimbrzeszsojyoexrijkaxbxzyxeioi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552785.6130784-136-153200002420724/AnsiballZ_command.py'
Jan 27 22:26:26 compute-0 sudo[89868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:26 compute-0 python3.9[89870]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:26:26 compute-0 sudo[89868]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:26 compute-0 sudo[90021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdbumsxevnkzzxuwanrubyhnshcclbvh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769552786.4001014-144-189018113181793/AnsiballZ_edpm_nftables_from_files.py'
Jan 27 22:26:26 compute-0 sudo[90021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:27 compute-0 python3[90023]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 27 22:26:27 compute-0 sudo[90021]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:27 compute-0 sudo[90173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxipemqbftwublmtvdwcagtjdwrnxqpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552787.2921085-152-115320268688106/AnsiballZ_stat.py'
Jan 27 22:26:27 compute-0 sudo[90173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:27 compute-0 python3.9[90175]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:26:27 compute-0 sudo[90173]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:28 compute-0 sudo[90298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbfeamhzczaimlfvzfiuvucjmtethwfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552787.2921085-152-115320268688106/AnsiballZ_copy.py'
Jan 27 22:26:28 compute-0 sudo[90298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:28 compute-0 python3.9[90300]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552787.2921085-152-115320268688106/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:26:28 compute-0 sudo[90298]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:29 compute-0 sudo[90450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axiusznkqhmdavuwkgewvvjsgnhmxgls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552788.8257444-167-229190747992448/AnsiballZ_stat.py'
Jan 27 22:26:29 compute-0 sudo[90450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:29 compute-0 python3.9[90452]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:26:29 compute-0 sudo[90450]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:29 compute-0 sudo[90575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evdkpqhwzccbusiogtngihibxaddolfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552788.8257444-167-229190747992448/AnsiballZ_copy.py'
Jan 27 22:26:29 compute-0 sudo[90575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:29 compute-0 python3.9[90577]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552788.8257444-167-229190747992448/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:26:29 compute-0 sudo[90575]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:30 compute-0 sudo[90727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyfvzxlrcucaokyrsxvmyvaccfofnetz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552790.0070956-182-35391502511088/AnsiballZ_stat.py'
Jan 27 22:26:30 compute-0 sudo[90727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:30 compute-0 python3.9[90729]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:26:30 compute-0 sudo[90727]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:30 compute-0 sudo[90852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlzvgfvgjsuyobwgsbsrvrnmzeptsuww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552790.0070956-182-35391502511088/AnsiballZ_copy.py'
Jan 27 22:26:30 compute-0 sudo[90852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:31 compute-0 python3.9[90854]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552790.0070956-182-35391502511088/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:26:31 compute-0 sudo[90852]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:31 compute-0 sudo[91004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfbjveromfqpciwqfnohpuucmmoiljcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552791.3357089-197-221211128005371/AnsiballZ_stat.py'
Jan 27 22:26:31 compute-0 sudo[91004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:31 compute-0 python3.9[91006]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:26:31 compute-0 sudo[91004]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:32 compute-0 sudo[91129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psadyisoyfsbgiwaimyothbxirqkpvll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552791.3357089-197-221211128005371/AnsiballZ_copy.py'
Jan 27 22:26:32 compute-0 sudo[91129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:32 compute-0 python3.9[91131]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552791.3357089-197-221211128005371/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:26:32 compute-0 sudo[91129]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:32 compute-0 sudo[91281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdxivcivlnbqqqovdlebdykbbqaxwmpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552792.6075742-212-97870217292771/AnsiballZ_stat.py'
Jan 27 22:26:32 compute-0 sudo[91281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:33 compute-0 python3.9[91283]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:26:33 compute-0 sudo[91281]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:33 compute-0 sudo[91406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgicvryxeksojcnjqstaijgnrekgyvge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552792.6075742-212-97870217292771/AnsiballZ_copy.py'
Jan 27 22:26:33 compute-0 sudo[91406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:33 compute-0 python3.9[91408]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769552792.6075742-212-97870217292771/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:26:33 compute-0 sudo[91406]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:34 compute-0 sudo[91558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tifrkttohibofeosebruayetkkdxbviz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552793.7913065-227-78474498336717/AnsiballZ_file.py'
Jan 27 22:26:34 compute-0 sudo[91558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:34 compute-0 python3.9[91560]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:26:34 compute-0 sudo[91558]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:34 compute-0 sudo[91710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjqjaxujmfnmxqokkngmkkruvqkievjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552794.3884883-235-158615650959864/AnsiballZ_command.py'
Jan 27 22:26:34 compute-0 sudo[91710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:34 compute-0 python3.9[91712]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:26:34 compute-0 sudo[91710]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:35 compute-0 sudo[91865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zypapiramxanasvqdllwcrcqdhlzclfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552794.9649038-243-24874363758678/AnsiballZ_blockinfile.py'
Jan 27 22:26:35 compute-0 sudo[91865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:35 compute-0 python3.9[91867]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:26:35 compute-0 sudo[91865]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:36 compute-0 sudo[92018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubhduyyyexjrqnrhgdhkaqfkezqydjvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552795.7770977-252-211760470513943/AnsiballZ_command.py'
Jan 27 22:26:36 compute-0 sudo[92018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:36 compute-0 python3.9[92020]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:26:36 compute-0 sudo[92018]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:36 compute-0 sudo[92171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkktulyrqqyodfoygyhfuwgdfdkedbje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552796.4112375-260-81012870308145/AnsiballZ_stat.py'
Jan 27 22:26:36 compute-0 sudo[92171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:36 compute-0 python3.9[92173]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:26:37 compute-0 sudo[92171]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:37 compute-0 sudo[92325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqtuhrczyyekofqbynjsjgajgfisuzch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552797.15112-268-53590656099341/AnsiballZ_command.py'
Jan 27 22:26:37 compute-0 sudo[92325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:37 compute-0 python3.9[92327]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:26:37 compute-0 sudo[92325]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:38 compute-0 sudo[92480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usfnqprftubuvmvnhigimvzdawdqmggr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552797.8112192-276-196345787409154/AnsiballZ_file.py'
Jan 27 22:26:38 compute-0 sudo[92480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:38 compute-0 python3.9[92482]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:26:38 compute-0 sudo[92480]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:39 compute-0 python3.9[92632]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:26:40 compute-0 sudo[92783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tujewmzzjyebseouiepegqllinutpkjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552800.079145-316-165226091656295/AnsiballZ_command.py'
Jan 27 22:26:40 compute-0 sudo[92783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:40 compute-0 python3.9[92785]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:8d:1d:08:09" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:26:40 compute-0 ovs-vsctl[92786]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:8d:1d:08:09 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 27 22:26:40 compute-0 sudo[92783]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:41 compute-0 sudo[92936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuqmtkiuzndcfhcxnaevmxfouphueovq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552800.8691945-325-10276351191630/AnsiballZ_command.py'
Jan 27 22:26:41 compute-0 sudo[92936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:41 compute-0 python3.9[92938]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                            ovs-vsctl show | grep -q "Manager"
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:26:41 compute-0 sudo[92936]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:41 compute-0 sudo[93091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tadiiwuddepbqdrehyxpzvohxgddwjze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552801.5250175-333-61583945248860/AnsiballZ_command.py'
Jan 27 22:26:41 compute-0 sudo[93091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:42 compute-0 python3.9[93093]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:26:42 compute-0 ovs-vsctl[93094]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 27 22:26:42 compute-0 sudo[93091]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:42 compute-0 python3.9[93244]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:26:43 compute-0 sudo[93396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evhttnnhlmvuyoghduxlbrvfccswilgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552803.0266798-350-151537869438882/AnsiballZ_file.py'
Jan 27 22:26:43 compute-0 sudo[93396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:43 compute-0 python3.9[93398]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:26:43 compute-0 sudo[93396]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:44 compute-0 sudo[93548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blmryixvowdwqqnojmrjpexqjwdeunbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552803.6964998-358-159118690435167/AnsiballZ_stat.py'
Jan 27 22:26:44 compute-0 sudo[93548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:44 compute-0 python3.9[93550]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:26:44 compute-0 sudo[93548]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:44 compute-0 sudo[93626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehvntymvprdbxjwnangzcvmftyakexmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552803.6964998-358-159118690435167/AnsiballZ_file.py'
Jan 27 22:26:44 compute-0 sudo[93626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:44 compute-0 python3.9[93628]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:26:44 compute-0 sudo[93626]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:45 compute-0 sudo[93778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhvqfqtsgbyzdgoqhnmhnzcfmkqwobph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552804.967474-358-199263451960451/AnsiballZ_stat.py'
Jan 27 22:26:45 compute-0 sudo[93778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:45 compute-0 python3.9[93780]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:26:45 compute-0 sudo[93778]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:45 compute-0 sudo[93856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccuslvkuiiwytizwamjxtozgpdvycwaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552804.967474-358-199263451960451/AnsiballZ_file.py'
Jan 27 22:26:45 compute-0 sudo[93856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:45 compute-0 python3.9[93858]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:26:45 compute-0 sudo[93856]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:46 compute-0 sudo[94008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lalsvxzdxoivamwgjqwxcgsqglkvyflv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552806.0128844-381-2800650719174/AnsiballZ_file.py'
Jan 27 22:26:46 compute-0 sudo[94008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:46 compute-0 python3.9[94010]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:26:46 compute-0 sudo[94008]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:46 compute-0 sudo[94160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmfhbebsavllxlyhhzgihybboehhbbln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552806.613616-389-12053346616508/AnsiballZ_stat.py'
Jan 27 22:26:46 compute-0 sudo[94160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:47 compute-0 python3.9[94162]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:26:47 compute-0 sudo[94160]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:47 compute-0 sudo[94239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geehxqpfavmrhytrzpmdbcjuaabbamll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552806.613616-389-12053346616508/AnsiballZ_file.py'
Jan 27 22:26:47 compute-0 sudo[94239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:47 compute-0 python3.9[94241]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:26:47 compute-0 sudo[94239]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:47 compute-0 sudo[94391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khwtpemgfefhvcsozfdygvglwyyuulya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552807.6918168-401-195860896995684/AnsiballZ_stat.py'
Jan 27 22:26:47 compute-0 sudo[94391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:48 compute-0 python3.9[94393]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:26:48 compute-0 sudo[94391]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:48 compute-0 sudo[94469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byudpmgmofippobwfjbndsrnfuovjwtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552807.6918168-401-195860896995684/AnsiballZ_file.py'
Jan 27 22:26:48 compute-0 sudo[94469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:48 compute-0 python3.9[94471]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:26:48 compute-0 sudo[94469]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:49 compute-0 sudo[94621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srwmwxecmoryeygxfrazczzqzwbarfvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552808.785617-413-58793523915652/AnsiballZ_systemd.py'
Jan 27 22:26:49 compute-0 sudo[94621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:49 compute-0 python3.9[94623]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:26:49 compute-0 systemd[1]: Reloading.
Jan 27 22:26:49 compute-0 systemd-rc-local-generator[94644]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:26:49 compute-0 systemd-sysv-generator[94652]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:26:49 compute-0 sudo[94621]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:50 compute-0 sudo[94811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkqblizrgsialqlyudyulgecgjyfdekj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552809.9577954-421-181018759516043/AnsiballZ_stat.py'
Jan 27 22:26:50 compute-0 sudo[94811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:50 compute-0 python3.9[94813]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:26:50 compute-0 sudo[94811]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:50 compute-0 sudo[94889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrcobeexnigbdahqmucxezfjibvtkzpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552809.9577954-421-181018759516043/AnsiballZ_file.py'
Jan 27 22:26:50 compute-0 sudo[94889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:50 compute-0 python3.9[94891]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:26:50 compute-0 sudo[94889]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:51 compute-0 sudo[95041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbytvgfngzuxunrovcqkxlnbsgczflwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552811.009334-433-229838917842113/AnsiballZ_stat.py'
Jan 27 22:26:51 compute-0 sudo[95041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:51 compute-0 python3.9[95043]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:26:51 compute-0 sudo[95041]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:51 compute-0 sudo[95119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bguhlftllindgeowkbohwuwluqtesedb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552811.009334-433-229838917842113/AnsiballZ_file.py'
Jan 27 22:26:51 compute-0 sudo[95119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:51 compute-0 python3.9[95121]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:26:51 compute-0 sudo[95119]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:52 compute-0 sudo[95271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkblfcfhrydykvbbrrwkhlyrrynsmomn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552812.0693114-445-217043759513815/AnsiballZ_systemd.py'
Jan 27 22:26:52 compute-0 sudo[95271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:52 compute-0 python3.9[95273]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:26:52 compute-0 systemd[1]: Reloading.
Jan 27 22:26:52 compute-0 systemd-rc-local-generator[95300]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:26:52 compute-0 systemd-sysv-generator[95304]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:26:52 compute-0 systemd[1]: Starting Create netns directory...
Jan 27 22:26:52 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 27 22:26:52 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 27 22:26:52 compute-0 systemd[1]: Finished Create netns directory.
Jan 27 22:26:52 compute-0 sudo[95271]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:53 compute-0 sudo[95464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuhtavgoglzlljwtboolpfifvpctdgja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552813.226037-455-251378369519573/AnsiballZ_file.py'
Jan 27 22:26:53 compute-0 sudo[95464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:53 compute-0 python3.9[95466]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:26:53 compute-0 sudo[95464]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:54 compute-0 sudo[95616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nunkhxxgbilsjmvdzxdcjkiltqslpqyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552813.8173566-463-108015773580439/AnsiballZ_stat.py'
Jan 27 22:26:54 compute-0 sudo[95616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:54 compute-0 python3.9[95618]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:26:54 compute-0 sudo[95616]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:54 compute-0 sudo[95739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbenahfenesvclwagptgofawqkojxkru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552813.8173566-463-108015773580439/AnsiballZ_copy.py'
Jan 27 22:26:54 compute-0 sudo[95739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:54 compute-0 python3.9[95741]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769552813.8173566-463-108015773580439/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:26:54 compute-0 sudo[95739]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:55 compute-0 sudo[95891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckghkundmqbuyxrmotooggyoxfnzklxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552815.4213028-480-198166872888420/AnsiballZ_file.py'
Jan 27 22:26:55 compute-0 sudo[95891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:55 compute-0 python3.9[95893]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:26:55 compute-0 sudo[95891]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:56 compute-0 sudo[96043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffppwjqqdskhtsebyostahvkolaomuxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552816.0785472-488-124606946241684/AnsiballZ_file.py'
Jan 27 22:26:56 compute-0 sudo[96043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:56 compute-0 python3.9[96045]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:26:56 compute-0 sudo[96043]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:57 compute-0 sudo[96195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryhdkejjoareciiitfberletqqvcficm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552816.7599602-496-110803138352425/AnsiballZ_stat.py'
Jan 27 22:26:57 compute-0 sudo[96195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:57 compute-0 python3.9[96197]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:26:57 compute-0 sudo[96195]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:57 compute-0 sudo[96318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdxqhmzcbgzfsnjghltgkfytwgpnqivm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552816.7599602-496-110803138352425/AnsiballZ_copy.py'
Jan 27 22:26:57 compute-0 sudo[96318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:26:57 compute-0 python3.9[96320]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769552816.7599602-496-110803138352425/.source.json _original_basename=.vp6xs0iw follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:26:57 compute-0 sudo[96318]: pam_unix(sudo:session): session closed for user root
Jan 27 22:26:58 compute-0 python3.9[96470]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:27:00 compute-0 sudo[96891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgaflboznoczgawpetrfpzfspvfkdoex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552820.1440463-536-5432771989145/AnsiballZ_container_config_data.py'
Jan 27 22:27:00 compute-0 sudo[96891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:00 compute-0 python3.9[96893]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 27 22:27:00 compute-0 sudo[96891]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:01 compute-0 anacron[31078]: Job `cron.weekly' started
Jan 27 22:27:01 compute-0 anacron[31078]: Job `cron.weekly' terminated
Jan 27 22:27:01 compute-0 sudo[97045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eihgwrehzhiqjeohqbsfomzpjvzgzhsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552821.1102953-547-143234670015631/AnsiballZ_container_config_hash.py'
Jan 27 22:27:01 compute-0 sudo[97045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:01 compute-0 python3.9[97047]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 22:27:01 compute-0 sudo[97045]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:02 compute-0 sudo[97197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tosvktgbosywbjbfveipvfhhrsyxfzkw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769552822.0311337-557-126665128877994/AnsiballZ_edpm_container_manage.py'
Jan 27 22:27:02 compute-0 sudo[97197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:02 compute-0 python3[97199]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 22:27:02 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 22:27:02 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 22:27:02 compute-0 podman[97234]: 2026-01-27 22:27:02.838203535 +0000 UTC m=+0.042560203 container create 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 22:27:02 compute-0 podman[97234]: 2026-01-27 22:27:02.816039501 +0000 UTC m=+0.020396199 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 27 22:27:02 compute-0 python3[97199]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530 --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 27 22:27:02 compute-0 sudo[97197]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:03 compute-0 sudo[97422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myjzojtoisnoboguicldablxwfmagnmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552823.0916812-565-279869562494734/AnsiballZ_stat.py'
Jan 27 22:27:03 compute-0 sudo[97422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:03 compute-0 python3.9[97424]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:27:03 compute-0 sudo[97422]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 27 22:27:04 compute-0 sudo[97576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnvqlusrlutindqxtixzjvilbxzyjzrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552823.8026383-574-54065529625514/AnsiballZ_file.py'
Jan 27 22:27:04 compute-0 sudo[97576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:04 compute-0 python3.9[97578]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:27:04 compute-0 sudo[97576]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:04 compute-0 sudo[97652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zowannjtgrlqznanssganvdtehknwklq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552823.8026383-574-54065529625514/AnsiballZ_stat.py'
Jan 27 22:27:04 compute-0 sudo[97652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:04 compute-0 python3.9[97654]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:27:04 compute-0 sudo[97652]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:05 compute-0 sudo[97803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdkzmkucecjturodoeazwuzypmxcmsbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552824.7698033-574-143587236956696/AnsiballZ_copy.py'
Jan 27 22:27:05 compute-0 sudo[97803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:05 compute-0 python3.9[97805]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769552824.7698033-574-143587236956696/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:27:05 compute-0 sudo[97803]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:05 compute-0 sudo[97879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upgmeuuoqrpmocqcuxolpyqbdjpyoizv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552824.7698033-574-143587236956696/AnsiballZ_systemd.py'
Jan 27 22:27:05 compute-0 sudo[97879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:05 compute-0 python3.9[97881]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 22:27:05 compute-0 systemd[1]: Reloading.
Jan 27 22:27:06 compute-0 systemd-rc-local-generator[97906]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:27:06 compute-0 systemd-sysv-generator[97910]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:27:06 compute-0 sudo[97879]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:06 compute-0 sudo[97990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alobyxgmcsdrpvxgnoylxmnmrkvbjcnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552824.7698033-574-143587236956696/AnsiballZ_systemd.py'
Jan 27 22:27:06 compute-0 sudo[97990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:06 compute-0 python3.9[97992]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:27:07 compute-0 systemd[1]: Reloading.
Jan 27 22:27:07 compute-0 systemd-sysv-generator[98023]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:27:07 compute-0 systemd-rc-local-generator[98019]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:27:08 compute-0 systemd[1]: Starting ovn_controller container...
Jan 27 22:27:08 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 27 22:27:08 compute-0 systemd[1]: Started libcrun container.
Jan 27 22:27:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bc663bb762e0fa8cad70cdac1cc2ecb0fb1e7b5758d9b6a07ceef5425cf232b/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 27 22:27:08 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16.
Jan 27 22:27:08 compute-0 podman[98034]: 2026-01-27 22:27:08.175594982 +0000 UTC m=+0.124398531 container init 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 22:27:08 compute-0 ovn_controller[98048]: + sudo -E kolla_set_configs
Jan 27 22:27:08 compute-0 podman[98034]: 2026-01-27 22:27:08.198665515 +0000 UTC m=+0.147469034 container start 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 27 22:27:08 compute-0 edpm-start-podman-container[98034]: ovn_controller
Jan 27 22:27:08 compute-0 systemd[1]: Created slice User Slice of UID 0.
Jan 27 22:27:08 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 27 22:27:08 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 27 22:27:08 compute-0 systemd[1]: Starting User Manager for UID 0...
Jan 27 22:27:08 compute-0 edpm-start-podman-container[98033]: Creating additional drop-in dependency for "ovn_controller" (5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16)
Jan 27 22:27:08 compute-0 systemd[98089]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Jan 27 22:27:08 compute-0 podman[98055]: 2026-01-27 22:27:08.268870297 +0000 UTC m=+0.059564130 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 22:27:08 compute-0 systemd[1]: 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16-69010b106a855015.service: Main process exited, code=exited, status=1/FAILURE
Jan 27 22:27:08 compute-0 systemd[1]: 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16-69010b106a855015.service: Failed with result 'exit-code'.
Jan 27 22:27:08 compute-0 systemd[1]: Reloading.
Jan 27 22:27:08 compute-0 systemd-rc-local-generator[98138]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:27:08 compute-0 systemd-sysv-generator[98141]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:27:08 compute-0 systemd[98089]: Queued start job for default target Main User Target.
Jan 27 22:27:08 compute-0 systemd[98089]: Created slice User Application Slice.
Jan 27 22:27:08 compute-0 systemd[98089]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 27 22:27:08 compute-0 systemd[98089]: Started Daily Cleanup of User's Temporary Directories.
Jan 27 22:27:08 compute-0 systemd[98089]: Reached target Paths.
Jan 27 22:27:08 compute-0 systemd[98089]: Reached target Timers.
Jan 27 22:27:08 compute-0 systemd[98089]: Starting D-Bus User Message Bus Socket...
Jan 27 22:27:08 compute-0 systemd[98089]: Starting Create User's Volatile Files and Directories...
Jan 27 22:27:08 compute-0 systemd[98089]: Listening on D-Bus User Message Bus Socket.
Jan 27 22:27:08 compute-0 systemd[98089]: Reached target Sockets.
Jan 27 22:27:08 compute-0 systemd[98089]: Finished Create User's Volatile Files and Directories.
Jan 27 22:27:08 compute-0 systemd[98089]: Reached target Basic System.
Jan 27 22:27:08 compute-0 systemd[98089]: Reached target Main User Target.
Jan 27 22:27:08 compute-0 systemd[98089]: Startup finished in 122ms.
Jan 27 22:27:08 compute-0 systemd[1]: Started User Manager for UID 0.
Jan 27 22:27:08 compute-0 systemd[1]: Started ovn_controller container.
Jan 27 22:27:08 compute-0 systemd[1]: Started Session c1 of User root.
Jan 27 22:27:08 compute-0 sudo[97990]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:08 compute-0 ovn_controller[98048]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 27 22:27:08 compute-0 ovn_controller[98048]: INFO:__main__:Validating config file
Jan 27 22:27:08 compute-0 ovn_controller[98048]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 27 22:27:08 compute-0 ovn_controller[98048]: INFO:__main__:Writing out command to execute
Jan 27 22:27:08 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 27 22:27:08 compute-0 ovn_controller[98048]: ++ cat /run_command
Jan 27 22:27:08 compute-0 ovn_controller[98048]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 27 22:27:08 compute-0 ovn_controller[98048]: + ARGS=
Jan 27 22:27:08 compute-0 ovn_controller[98048]: + sudo kolla_copy_cacerts
Jan 27 22:27:08 compute-0 systemd[1]: Started Session c2 of User root.
Jan 27 22:27:08 compute-0 ovn_controller[98048]: + [[ ! -n '' ]]
Jan 27 22:27:08 compute-0 ovn_controller[98048]: + . kolla_extend_start
Jan 27 22:27:08 compute-0 ovn_controller[98048]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 27 22:27:08 compute-0 ovn_controller[98048]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 27 22:27:08 compute-0 ovn_controller[98048]: + umask 0022
Jan 27 22:27:08 compute-0 ovn_controller[98048]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 27 22:27:08 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 27 22:27:08 compute-0 ovn_controller[98048]: 2026-01-27T22:27:08Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 27 22:27:08 compute-0 ovn_controller[98048]: 2026-01-27T22:27:08Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 27 22:27:08 compute-0 ovn_controller[98048]: 2026-01-27T22:27:08Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 27 22:27:08 compute-0 ovn_controller[98048]: 2026-01-27T22:27:08Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 27 22:27:08 compute-0 ovn_controller[98048]: 2026-01-27T22:27:08Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 27 22:27:08 compute-0 ovn_controller[98048]: 2026-01-27T22:27:08Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 27 22:27:08 compute-0 NetworkManager[56600]: <info>  [1769552828.6310] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 27 22:27:08 compute-0 NetworkManager[56600]: <info>  [1769552828.6318] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 22:27:08 compute-0 NetworkManager[56600]: <warn>  [1769552828.6321] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 22:27:08 compute-0 NetworkManager[56600]: <info>  [1769552828.6329] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Jan 27 22:27:08 compute-0 NetworkManager[56600]: <info>  [1769552828.6335] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/16)
Jan 27 22:27:08 compute-0 NetworkManager[56600]: <info>  [1769552828.6338] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 27 22:27:08 compute-0 ovn_controller[98048]: 2026-01-27T22:27:08Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 27 22:27:08 compute-0 kernel: br-int: entered promiscuous mode
Jan 27 22:27:08 compute-0 ovn_controller[98048]: 2026-01-27T22:27:08Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 27 22:27:08 compute-0 ovn_controller[98048]: 2026-01-27T22:27:08Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 27 22:27:08 compute-0 ovn_controller[98048]: 2026-01-27T22:27:08Z|00010|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 27 22:27:08 compute-0 ovn_controller[98048]: 2026-01-27T22:27:08Z|00011|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 27 22:27:08 compute-0 ovn_controller[98048]: 2026-01-27T22:27:08Z|00012|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 27 22:27:08 compute-0 ovn_controller[98048]: 2026-01-27T22:27:08Z|00013|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 27 22:27:08 compute-0 ovn_controller[98048]: 2026-01-27T22:27:08Z|00014|features|INFO|OVS Feature: ct_flush, state: supported
Jan 27 22:27:08 compute-0 ovn_controller[98048]: 2026-01-27T22:27:08Z|00015|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 27 22:27:08 compute-0 ovn_controller[98048]: 2026-01-27T22:27:08Z|00016|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 27 22:27:08 compute-0 ovn_controller[98048]: 2026-01-27T22:27:08Z|00017|main|INFO|OVS feature set changed, force recompute.
Jan 27 22:27:08 compute-0 ovn_controller[98048]: 2026-01-27T22:27:08Z|00018|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 27 22:27:08 compute-0 ovn_controller[98048]: 2026-01-27T22:27:08Z|00019|main|INFO|OVS feature set changed, force recompute.
Jan 27 22:27:08 compute-0 ovn_controller[98048]: 2026-01-27T22:27:08Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 27 22:27:08 compute-0 ovn_controller[98048]: 2026-01-27T22:27:08Z|00021|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 27 22:27:08 compute-0 ovn_controller[98048]: 2026-01-27T22:27:08Z|00022|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 27 22:27:08 compute-0 ovn_controller[98048]: 2026-01-27T22:27:08Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 27 22:27:08 compute-0 ovn_controller[98048]: 2026-01-27T22:27:08Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 27 22:27:08 compute-0 ovn_controller[98048]: 2026-01-27T22:27:08Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 27 22:27:08 compute-0 ovn_controller[98048]: 2026-01-27T22:27:08Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 27 22:27:08 compute-0 ovn_controller[98048]: 2026-01-27T22:27:08Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 27 22:27:08 compute-0 ovn_controller[98048]: 2026-01-27T22:27:08Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 27 22:27:08 compute-0 ovn_controller[98048]: 2026-01-27T22:27:08Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 27 22:27:08 compute-0 ovn_controller[98048]: 2026-01-27T22:27:08Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 27 22:27:08 compute-0 NetworkManager[56600]: <info>  [1769552828.6482] manager: (ovn-cea898-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 27 22:27:08 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Jan 27 22:27:08 compute-0 NetworkManager[56600]: <info>  [1769552828.6676] device (genev_sys_6081): carrier: link connected
Jan 27 22:27:08 compute-0 NetworkManager[56600]: <info>  [1769552828.6680] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/18)
Jan 27 22:27:08 compute-0 systemd-udevd[98185]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 22:27:08 compute-0 systemd-udevd[98189]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 22:27:09 compute-0 python3.9[98317]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 27 22:27:10 compute-0 sudo[98467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlcsncxdhlsvtvixmwtskunpkbfohcfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552829.971184-619-146337161751983/AnsiballZ_stat.py'
Jan 27 22:27:10 compute-0 sudo[98467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:10 compute-0 python3.9[98469]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:27:10 compute-0 sudo[98467]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:10 compute-0 sudo[98590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnqewyxiqyxavhlehryyplnzeljsvxde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552829.971184-619-146337161751983/AnsiballZ_copy.py'
Jan 27 22:27:10 compute-0 sudo[98590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:10 compute-0 python3.9[98592]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769552829.971184-619-146337161751983/.source.yaml _original_basename=.rpfadmwr follow=False checksum=0278f06c0ba3338d543f32d12a96bd398de693e5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:27:10 compute-0 sudo[98590]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:11 compute-0 sudo[98742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oydbspvckijujxbcscfzetfpavwgwlsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552831.141772-634-21428265083785/AnsiballZ_command.py'
Jan 27 22:27:11 compute-0 sudo[98742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:11 compute-0 python3.9[98744]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:27:11 compute-0 ovs-vsctl[98745]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 27 22:27:11 compute-0 sudo[98742]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:12 compute-0 sudo[98895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpatzxjsfrvedyrfcvscvcydpswipvlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552831.8225422-642-141207389131769/AnsiballZ_command.py'
Jan 27 22:27:12 compute-0 sudo[98895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:12 compute-0 python3.9[98897]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:27:12 compute-0 ovs-vsctl[98899]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 27 22:27:12 compute-0 sudo[98895]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:13 compute-0 sudo[99050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znexipeoowcvhspfdbhqnqqzjanykhvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552832.7763734-656-186021946084390/AnsiballZ_command.py'
Jan 27 22:27:13 compute-0 sudo[99050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:13 compute-0 python3.9[99052]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:27:13 compute-0 ovs-vsctl[99053]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 27 22:27:13 compute-0 sudo[99050]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:13 compute-0 sshd-session[87568]: Connection closed by 192.168.122.30 port 59840
Jan 27 22:27:13 compute-0 sshd-session[87565]: pam_unix(sshd:session): session closed for user zuul
Jan 27 22:27:13 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Jan 27 22:27:13 compute-0 systemd[1]: session-19.scope: Consumed 45.732s CPU time.
Jan 27 22:27:13 compute-0 systemd-logind[789]: Session 19 logged out. Waiting for processes to exit.
Jan 27 22:27:13 compute-0 systemd-logind[789]: Removed session 19.
Jan 27 22:27:18 compute-0 systemd[1]: Stopping User Manager for UID 0...
Jan 27 22:27:18 compute-0 systemd[98089]: Activating special unit Exit the Session...
Jan 27 22:27:18 compute-0 systemd[98089]: Stopped target Main User Target.
Jan 27 22:27:18 compute-0 systemd[98089]: Stopped target Basic System.
Jan 27 22:27:18 compute-0 systemd[98089]: Stopped target Paths.
Jan 27 22:27:18 compute-0 systemd[98089]: Stopped target Sockets.
Jan 27 22:27:18 compute-0 systemd[98089]: Stopped target Timers.
Jan 27 22:27:18 compute-0 systemd[98089]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 27 22:27:18 compute-0 systemd[98089]: Closed D-Bus User Message Bus Socket.
Jan 27 22:27:18 compute-0 systemd[98089]: Stopped Create User's Volatile Files and Directories.
Jan 27 22:27:18 compute-0 systemd[98089]: Removed slice User Application Slice.
Jan 27 22:27:18 compute-0 systemd[98089]: Reached target Shutdown.
Jan 27 22:27:18 compute-0 systemd[98089]: Finished Exit the Session.
Jan 27 22:27:18 compute-0 systemd[98089]: Reached target Exit the Session.
Jan 27 22:27:18 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Jan 27 22:27:18 compute-0 systemd[1]: Stopped User Manager for UID 0.
Jan 27 22:27:18 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 27 22:27:18 compute-0 sshd-session[99078]: Accepted publickey for zuul from 192.168.122.30 port 42110 ssh2: ECDSA SHA256:f2siSFgqhRl+V43NMPJ82N3mZUylXFtu0KAbYfQTK7A
Jan 27 22:27:18 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 27 22:27:18 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 27 22:27:18 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 27 22:27:18 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Jan 27 22:27:18 compute-0 systemd-logind[789]: New session 21 of user zuul.
Jan 27 22:27:18 compute-0 systemd[1]: Started Session 21 of User zuul.
Jan 27 22:27:18 compute-0 sshd-session[99078]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 22:27:19 compute-0 python3.9[99233]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:27:20 compute-0 sudo[99387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xiatqekkrvvkonppfuaztowpxfvvhbza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552840.2650025-29-149648475969762/AnsiballZ_file.py'
Jan 27 22:27:20 compute-0 sudo[99387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:20 compute-0 python3.9[99389]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:27:20 compute-0 sudo[99387]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:21 compute-0 sudo[99539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdklpmiemavmvrontyfytbgqethhljel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552841.0686207-29-62934755296237/AnsiballZ_file.py'
Jan 27 22:27:21 compute-0 sudo[99539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:21 compute-0 python3.9[99541]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:27:21 compute-0 sudo[99539]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:21 compute-0 sudo[99691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvbkehiczfifqnsnyheiqpkhspzrrzgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552841.6831365-29-9543670177946/AnsiballZ_file.py'
Jan 27 22:27:21 compute-0 sudo[99691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:22 compute-0 python3.9[99693]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:27:22 compute-0 sudo[99691]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:22 compute-0 sudo[99843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ashxysboqovhxrvwyyfxcffspjgcvlbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552842.290642-29-47786773387339/AnsiballZ_file.py'
Jan 27 22:27:22 compute-0 sudo[99843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:22 compute-0 python3.9[99845]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:27:22 compute-0 sudo[99843]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:23 compute-0 sudo[99995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dosjzjysiujlbxkeqfzxplpyxnwmphsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552843.064772-29-75519196717509/AnsiballZ_file.py'
Jan 27 22:27:23 compute-0 sudo[99995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:23 compute-0 python3.9[99997]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:27:23 compute-0 sudo[99995]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:24 compute-0 python3.9[100147]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:27:24 compute-0 sudo[100297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anqctnqnybaypbjgprscraodcmwnknss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552844.5257926-73-121550751911487/AnsiballZ_seboolean.py'
Jan 27 22:27:24 compute-0 sudo[100297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:25 compute-0 python3.9[100299]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 27 22:27:25 compute-0 sudo[100297]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:26 compute-0 python3.9[100449]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:27:27 compute-0 python3.9[100571]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769552845.9378047-81-3341066113546/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:27:27 compute-0 python3.9[100721]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:27:28 compute-0 python3.9[100842]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769552847.2899451-96-95473967305666/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:27:28 compute-0 sudo[100992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlenbqoijzvrgmgzzmuhwdzjjxoivips ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552848.4785225-113-219445447004196/AnsiballZ_setup.py'
Jan 27 22:27:28 compute-0 sudo[100992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:29 compute-0 python3.9[100994]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 22:27:29 compute-0 sudo[100992]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:29 compute-0 sudo[101076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vedswvhxdxdmpgnttuvakffneszpvtys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552848.4785225-113-219445447004196/AnsiballZ_dnf.py'
Jan 27 22:27:29 compute-0 sudo[101076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:29 compute-0 python3.9[101078]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 22:27:31 compute-0 sudo[101076]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:31 compute-0 sudo[101229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpyiiccnawjohgklynikjfibttnhxqii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552851.354767-125-250545280723930/AnsiballZ_systemd.py'
Jan 27 22:27:31 compute-0 sudo[101229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:32 compute-0 python3.9[101231]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 22:27:32 compute-0 sudo[101229]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:32 compute-0 python3.9[101384]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:27:33 compute-0 python3.9[101505]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769552852.4912617-133-226389209631658/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:27:34 compute-0 python3.9[101655]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:27:34 compute-0 python3.9[101776]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769552853.5053117-133-6248304954121/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:27:35 compute-0 python3.9[101926]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:27:36 compute-0 python3.9[102047]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769552855.1843514-177-177025132255346/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:27:36 compute-0 python3.9[102197]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:27:37 compute-0 python3.9[102318]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769552856.206914-177-157007264603794/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:27:37 compute-0 python3.9[102468]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:27:38 compute-0 sudo[102635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zosekpnhuxbcnuhvzawfbweqkobgytjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552858.0817914-215-90171144333951/AnsiballZ_file.py'
Jan 27 22:27:38 compute-0 sudo[102635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:38 compute-0 ovn_controller[98048]: 2026-01-27T22:27:38Z|00025|memory|INFO|16000 kB peak resident set size after 29.8 seconds
Jan 27 22:27:38 compute-0 ovn_controller[98048]: 2026-01-27T22:27:38Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:471 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Jan 27 22:27:38 compute-0 podman[102594]: 2026-01-27 22:27:38.474048115 +0000 UTC m=+0.130000760 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 27 22:27:38 compute-0 python3.9[102641]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:27:38 compute-0 sudo[102635]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:39 compute-0 sudo[102799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyjjenjkrcptpscrbcxqyyxpdndmflsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552858.7492545-223-261133795850051/AnsiballZ_stat.py'
Jan 27 22:27:39 compute-0 sudo[102799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:39 compute-0 python3.9[102801]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:27:39 compute-0 sudo[102799]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:39 compute-0 sudo[102877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hurpbgbtysbvukbmneqgnfimbrkdzyel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552858.7492545-223-261133795850051/AnsiballZ_file.py'
Jan 27 22:27:39 compute-0 sudo[102877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:39 compute-0 python3.9[102879]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:27:39 compute-0 sudo[102877]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:39 compute-0 sudo[103029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmbcvvreadzhnhsreqsymqiqatqoultc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552859.767208-223-268995632999252/AnsiballZ_stat.py'
Jan 27 22:27:39 compute-0 sudo[103029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:40 compute-0 python3.9[103031]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:27:40 compute-0 sudo[103029]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:40 compute-0 sudo[103107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjtsimchhlxfdopzsocfsexxxbxodoix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552859.767208-223-268995632999252/AnsiballZ_file.py'
Jan 27 22:27:40 compute-0 sudo[103107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:40 compute-0 python3.9[103109]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:27:40 compute-0 sudo[103107]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:40 compute-0 sudo[103259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdgnmuwmljxkphgkgqerjrffomnjvjyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552860.7412012-246-97685685520674/AnsiballZ_file.py'
Jan 27 22:27:40 compute-0 sudo[103259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:41 compute-0 python3.9[103261]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:27:41 compute-0 sudo[103259]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:41 compute-0 sudo[103411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjafvixemjjlghnchrgasktdfxlukiwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552861.3334963-254-200233681664331/AnsiballZ_stat.py'
Jan 27 22:27:41 compute-0 sudo[103411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:41 compute-0 python3.9[103413]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:27:41 compute-0 sudo[103411]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:42 compute-0 sudo[103489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpezelqxiwuqlnxlaahgtmzehnwemdsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552861.3334963-254-200233681664331/AnsiballZ_file.py'
Jan 27 22:27:42 compute-0 sudo[103489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:42 compute-0 python3.9[103491]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:27:42 compute-0 sudo[103489]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:42 compute-0 sudo[103641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsqmtgvozmssuxqitjpgewylayundxti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552862.3934216-266-8458147772979/AnsiballZ_stat.py'
Jan 27 22:27:42 compute-0 sudo[103641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:42 compute-0 python3.9[103643]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:27:42 compute-0 sudo[103641]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:43 compute-0 sudo[103719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpowvpvowdphmxeexltocnbuucuzyrfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552862.3934216-266-8458147772979/AnsiballZ_file.py'
Jan 27 22:27:43 compute-0 sudo[103719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:43 compute-0 python3.9[103721]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:27:43 compute-0 sudo[103719]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:43 compute-0 sudo[103871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlbflygxkezogmzetiamflpbiqgcdjez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552863.5991223-278-261811966105213/AnsiballZ_systemd.py'
Jan 27 22:27:43 compute-0 sudo[103871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:44 compute-0 python3.9[103873]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:27:44 compute-0 systemd[1]: Reloading.
Jan 27 22:27:44 compute-0 systemd-sysv-generator[103905]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:27:44 compute-0 systemd-rc-local-generator[103901]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:27:44 compute-0 sudo[103871]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:44 compute-0 sudo[104060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luwruknlizcvlmdmnqpdyrfvxlnzonkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552864.6135798-286-138356129476775/AnsiballZ_stat.py'
Jan 27 22:27:44 compute-0 sudo[104060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:45 compute-0 python3.9[104062]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:27:45 compute-0 sudo[104060]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:45 compute-0 sudo[104138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umwoamqxtmvzwhgzennixywrtpzexwbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552864.6135798-286-138356129476775/AnsiballZ_file.py'
Jan 27 22:27:45 compute-0 sudo[104138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:45 compute-0 python3.9[104140]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:27:45 compute-0 sudo[104138]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:46 compute-0 sudo[104290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dudlkdcpdejgpxglxbkjpxjebkdvyxws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552865.7465148-298-92842712411215/AnsiballZ_stat.py'
Jan 27 22:27:46 compute-0 sudo[104290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:46 compute-0 python3.9[104292]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:27:46 compute-0 sudo[104290]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:46 compute-0 sudo[104368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pndtvfsmoajgdlsdzobhjzqztomgyeyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552865.7465148-298-92842712411215/AnsiballZ_file.py'
Jan 27 22:27:46 compute-0 sudo[104368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:46 compute-0 python3.9[104370]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:27:46 compute-0 sudo[104368]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:47 compute-0 sudo[104520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfqkiqdczvsgjnqizbzphqkjkdwhsoge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552866.9073293-310-36467382243588/AnsiballZ_systemd.py'
Jan 27 22:27:47 compute-0 sudo[104520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:47 compute-0 python3.9[104522]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:27:47 compute-0 systemd[1]: Reloading.
Jan 27 22:27:47 compute-0 systemd-rc-local-generator[104547]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:27:47 compute-0 systemd-sysv-generator[104550]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:27:47 compute-0 systemd[1]: Starting Create netns directory...
Jan 27 22:27:47 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 27 22:27:47 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 27 22:27:47 compute-0 systemd[1]: Finished Create netns directory.
Jan 27 22:27:47 compute-0 sudo[104520]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:48 compute-0 sudo[104713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwhwqbcfqgfuaivgxveffmshrojqxsmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552868.2387578-320-269137009512905/AnsiballZ_file.py'
Jan 27 22:27:48 compute-0 sudo[104713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:48 compute-0 python3.9[104715]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:27:48 compute-0 sudo[104713]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:49 compute-0 sudo[104865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjsnpjrdprkqeljmcawgjeeplyvyyxtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552868.8584285-328-8388446320554/AnsiballZ_stat.py'
Jan 27 22:27:49 compute-0 sudo[104865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:49 compute-0 python3.9[104867]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:27:49 compute-0 sudo[104865]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:49 compute-0 sudo[104988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csxdykfyzhiwiqocznrjnddemvwmujqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552868.8584285-328-8388446320554/AnsiballZ_copy.py'
Jan 27 22:27:49 compute-0 sudo[104988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:49 compute-0 python3.9[104990]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769552868.8584285-328-8388446320554/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:27:49 compute-0 sudo[104988]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:50 compute-0 sudo[105140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkuodketoqgphudrgwrhowyvwnasswdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552870.165348-345-64962914763067/AnsiballZ_file.py'
Jan 27 22:27:50 compute-0 sudo[105140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:50 compute-0 python3.9[105142]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:27:50 compute-0 sudo[105140]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:51 compute-0 sudo[105292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auglnwqtzkdrxtbadfhbwrxqtytlblws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552870.7939444-353-240698701206812/AnsiballZ_file.py'
Jan 27 22:27:51 compute-0 sudo[105292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:51 compute-0 python3.9[105294]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:27:51 compute-0 sudo[105292]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:51 compute-0 sudo[105444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhzpwhftwsaudutmwlpxowykivxewmgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552871.493678-361-21665301342256/AnsiballZ_stat.py'
Jan 27 22:27:51 compute-0 sudo[105444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:51 compute-0 python3.9[105446]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:27:51 compute-0 sudo[105444]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:52 compute-0 sudo[105567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcqjoegaeitguqenpmyccptnlliwzgkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552871.493678-361-21665301342256/AnsiballZ_copy.py'
Jan 27 22:27:52 compute-0 sudo[105567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:52 compute-0 python3.9[105569]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769552871.493678-361-21665301342256/.source.json _original_basename=.xjzz4fel follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:27:52 compute-0 sudo[105567]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:53 compute-0 python3.9[105719]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:27:55 compute-0 sudo[106140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dljvdlkwagbpjodqwrdgzkpnslztjubv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552874.7566848-401-166971939648046/AnsiballZ_container_config_data.py'
Jan 27 22:27:55 compute-0 sudo[106140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:55 compute-0 python3.9[106142]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 27 22:27:55 compute-0 sudo[106140]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:56 compute-0 sudo[106292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lglmolymoyodfheotfgaydfjyluhbgoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552875.6407864-412-75310470301718/AnsiballZ_container_config_hash.py'
Jan 27 22:27:56 compute-0 sudo[106292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:56 compute-0 python3.9[106294]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 22:27:56 compute-0 sudo[106292]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:57 compute-0 sudo[106445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkcgaxbexzaefjrkfchioswiltefjvkv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769552876.8624191-422-26703135528993/AnsiballZ_edpm_container_manage.py'
Jan 27 22:27:57 compute-0 sudo[106445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:57 compute-0 python3[106447]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 22:27:57 compute-0 podman[106484]: 2026-01-27 22:27:57.918337186 +0000 UTC m=+0.068271014 container create 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 27 22:27:57 compute-0 podman[106484]: 2026-01-27 22:27:57.887453427 +0000 UTC m=+0.037387235 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 22:27:57 compute-0 python3[106447]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 22:27:58 compute-0 sudo[106445]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:58 compute-0 sudo[106671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxcavhmhveytyoygeoltwoyvxgwjaere ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552878.2630513-430-266766180736636/AnsiballZ_stat.py'
Jan 27 22:27:58 compute-0 sudo[106671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:58 compute-0 python3.9[106673]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:27:58 compute-0 sudo[106671]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:59 compute-0 sudo[106825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzbxfctldrykhcdsvunzrqssbqumbvql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552878.9467983-439-192927891527451/AnsiballZ_file.py'
Jan 27 22:27:59 compute-0 sudo[106825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:59 compute-0 python3.9[106827]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:27:59 compute-0 sudo[106825]: pam_unix(sudo:session): session closed for user root
Jan 27 22:27:59 compute-0 sudo[106901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzhekbrfjfgfflwewzypcdjzddnuyjuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552878.9467983-439-192927891527451/AnsiballZ_stat.py'
Jan 27 22:27:59 compute-0 sudo[106901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:27:59 compute-0 python3.9[106903]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:27:59 compute-0 sudo[106901]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:00 compute-0 sudo[107052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bopaumtsejgtmqdxmavbfwflvikviuku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552879.832489-439-81254627567937/AnsiballZ_copy.py'
Jan 27 22:28:00 compute-0 sudo[107052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:00 compute-0 python3.9[107054]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769552879.832489-439-81254627567937/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:28:00 compute-0 sudo[107052]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:00 compute-0 sudo[107128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgnogharrakkwyurcuapilifhgnnxfcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552879.832489-439-81254627567937/AnsiballZ_systemd.py'
Jan 27 22:28:00 compute-0 sudo[107128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:00 compute-0 python3.9[107130]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 22:28:00 compute-0 systemd[1]: Reloading.
Jan 27 22:28:01 compute-0 systemd-rc-local-generator[107151]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:28:01 compute-0 systemd-sysv-generator[107157]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:28:01 compute-0 sudo[107128]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:01 compute-0 sudo[107239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulmixnpgppyzjbckdcerixwsplfwwasf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552879.832489-439-81254627567937/AnsiballZ_systemd.py'
Jan 27 22:28:01 compute-0 sudo[107239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:01 compute-0 python3.9[107241]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:28:01 compute-0 systemd[1]: Reloading.
Jan 27 22:28:01 compute-0 systemd-rc-local-generator[107274]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:28:01 compute-0 systemd-sysv-generator[107277]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:28:02 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Jan 27 22:28:02 compute-0 systemd[1]: Started libcrun container.
Jan 27 22:28:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ec54dfc2e0cbcc5a2e78c6b6b50326331c81b38858152a60eefdcd352ee6146/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 27 22:28:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ec54dfc2e0cbcc5a2e78c6b6b50326331c81b38858152a60eefdcd352ee6146/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 22:28:02 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc.
Jan 27 22:28:02 compute-0 podman[107282]: 2026-01-27 22:28:02.26494941 +0000 UTC m=+0.124282885 container init 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 22:28:02 compute-0 ovn_metadata_agent[107297]: + sudo -E kolla_set_configs
Jan 27 22:28:02 compute-0 podman[107282]: 2026-01-27 22:28:02.284029563 +0000 UTC m=+0.143363028 container start 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 27 22:28:02 compute-0 edpm-start-podman-container[107282]: ovn_metadata_agent
Jan 27 22:28:02 compute-0 ovn_metadata_agent[107297]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 27 22:28:02 compute-0 ovn_metadata_agent[107297]: INFO:__main__:Validating config file
Jan 27 22:28:02 compute-0 ovn_metadata_agent[107297]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 27 22:28:02 compute-0 ovn_metadata_agent[107297]: INFO:__main__:Copying service configuration files
Jan 27 22:28:02 compute-0 ovn_metadata_agent[107297]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 27 22:28:02 compute-0 ovn_metadata_agent[107297]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 27 22:28:02 compute-0 ovn_metadata_agent[107297]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 27 22:28:02 compute-0 ovn_metadata_agent[107297]: INFO:__main__:Writing out command to execute
Jan 27 22:28:02 compute-0 ovn_metadata_agent[107297]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 27 22:28:02 compute-0 ovn_metadata_agent[107297]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 27 22:28:02 compute-0 ovn_metadata_agent[107297]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 27 22:28:02 compute-0 ovn_metadata_agent[107297]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 27 22:28:02 compute-0 ovn_metadata_agent[107297]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 27 22:28:02 compute-0 ovn_metadata_agent[107297]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 27 22:28:02 compute-0 ovn_metadata_agent[107297]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 27 22:28:02 compute-0 edpm-start-podman-container[107281]: Creating additional drop-in dependency for "ovn_metadata_agent" (70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc)
Jan 27 22:28:02 compute-0 ovn_metadata_agent[107297]: ++ cat /run_command
Jan 27 22:28:02 compute-0 ovn_metadata_agent[107297]: + CMD=neutron-ovn-metadata-agent
Jan 27 22:28:02 compute-0 ovn_metadata_agent[107297]: + ARGS=
Jan 27 22:28:02 compute-0 ovn_metadata_agent[107297]: + sudo kolla_copy_cacerts
Jan 27 22:28:02 compute-0 podman[107304]: 2026-01-27 22:28:02.356028265 +0000 UTC m=+0.062702805 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 22:28:02 compute-0 systemd[1]: Reloading.
Jan 27 22:28:02 compute-0 ovn_metadata_agent[107297]: Running command: 'neutron-ovn-metadata-agent'
Jan 27 22:28:02 compute-0 ovn_metadata_agent[107297]: + [[ ! -n '' ]]
Jan 27 22:28:02 compute-0 ovn_metadata_agent[107297]: + . kolla_extend_start
Jan 27 22:28:02 compute-0 ovn_metadata_agent[107297]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 27 22:28:02 compute-0 ovn_metadata_agent[107297]: + umask 0022
Jan 27 22:28:02 compute-0 ovn_metadata_agent[107297]: + exec neutron-ovn-metadata-agent
Jan 27 22:28:02 compute-0 systemd-rc-local-generator[107372]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:28:02 compute-0 systemd-sysv-generator[107375]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:28:02 compute-0 systemd[1]: Started ovn_metadata_agent container.
Jan 27 22:28:02 compute-0 sudo[107239]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:03 compute-0 python3.9[107534]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.079 107302 INFO neutron.common.config [-] Logging enabled!
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.080 107302 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.080 107302 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.080 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.080 107302 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.080 107302 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.081 107302 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.081 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.081 107302 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.081 107302 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.081 107302 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.081 107302 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.081 107302 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.081 107302 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.082 107302 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.082 107302 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.082 107302 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.082 107302 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.082 107302 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.082 107302 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.082 107302 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.082 107302 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.082 107302 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.082 107302 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.083 107302 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.083 107302 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.083 107302 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.083 107302 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.083 107302 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.083 107302 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.083 107302 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.083 107302 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.083 107302 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.084 107302 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.084 107302 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.084 107302 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.084 107302 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.084 107302 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.084 107302 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.084 107302 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.085 107302 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.085 107302 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.085 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.085 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.085 107302 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.085 107302 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.085 107302 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.085 107302 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.085 107302 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.085 107302 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.086 107302 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.086 107302 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.086 107302 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.086 107302 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.086 107302 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.086 107302 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.086 107302 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.086 107302 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.086 107302 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.086 107302 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.087 107302 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.087 107302 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.087 107302 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.087 107302 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.087 107302 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.087 107302 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.087 107302 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.087 107302 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.087 107302 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.088 107302 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.088 107302 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.088 107302 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.088 107302 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.088 107302 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.088 107302 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.088 107302 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.088 107302 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.088 107302 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.089 107302 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.089 107302 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.089 107302 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.089 107302 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.089 107302 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.089 107302 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.089 107302 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.089 107302 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.089 107302 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.089 107302 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.090 107302 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.090 107302 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.090 107302 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.090 107302 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.090 107302 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.090 107302 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.090 107302 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.090 107302 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.090 107302 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.090 107302 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.091 107302 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.091 107302 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.091 107302 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.091 107302 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.091 107302 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.091 107302 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.091 107302 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.091 107302 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.091 107302 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.091 107302 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.092 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.092 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.092 107302 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.092 107302 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.092 107302 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.092 107302 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.092 107302 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.092 107302 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.092 107302 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.093 107302 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.093 107302 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.093 107302 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.093 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.093 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.093 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.093 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.093 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.093 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.094 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.094 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.094 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.094 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.094 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.094 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.094 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.094 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.094 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.094 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.095 107302 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.095 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.095 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.095 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.095 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.095 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.095 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.095 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.095 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.096 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.096 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.096 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.096 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.096 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.096 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.096 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.096 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.096 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.097 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.097 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.097 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.097 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.097 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.097 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.097 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.097 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.097 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.097 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.098 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.098 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.098 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.098 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.098 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.098 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.098 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.098 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.098 107302 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.099 107302 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.099 107302 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.099 107302 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.099 107302 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.099 107302 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.099 107302 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.099 107302 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.099 107302 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.099 107302 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.099 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.100 107302 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.100 107302 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.100 107302 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.100 107302 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.100 107302 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.100 107302 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.100 107302 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.100 107302 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.100 107302 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.101 107302 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.101 107302 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.101 107302 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.101 107302 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.101 107302 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.101 107302 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.101 107302 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.101 107302 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.101 107302 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.101 107302 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.102 107302 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.102 107302 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.102 107302 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.102 107302 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.102 107302 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.102 107302 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.102 107302 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.102 107302 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.102 107302 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.103 107302 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.103 107302 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.103 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.103 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.103 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.103 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.103 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.103 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.103 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.103 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.104 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.104 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.104 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.104 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.104 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.104 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.104 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.104 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.104 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.104 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.105 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.105 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.105 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.105 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.105 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.105 107302 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.105 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.105 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.105 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.105 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.106 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.106 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.106 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.106 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.106 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.106 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.106 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.106 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.106 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.107 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.107 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.107 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.107 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.107 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.107 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.107 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.107 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.107 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.107 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.108 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.108 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.108 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.108 107302 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.108 107302 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.108 107302 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.108 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.108 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.109 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.109 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.109 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.109 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.109 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.109 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.109 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.109 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.109 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.109 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.110 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.110 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.110 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.110 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.110 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.110 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.110 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.110 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.111 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.111 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.111 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.111 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.111 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.111 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.111 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.111 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.111 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.112 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.112 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.112 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.112 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.112 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.112 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.112 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.112 107302 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.112 107302 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.121 107302 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.121 107302 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.122 107302 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.122 107302 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.122 107302 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.133 107302 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name e88f80e1-ee63-4bdc-95c3-ad473efb7428 (UUID: e88f80e1-ee63-4bdc-95c3-ad473efb7428) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.172 107302 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.172 107302 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.173 107302 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.173 107302 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.176 107302 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.181 107302 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.186 107302 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'e88f80e1-ee63-4bdc-95c3-ad473efb7428'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f8d908cb640>], external_ids={}, name=e88f80e1-ee63-4bdc-95c3-ad473efb7428, nb_cfg_timestamp=1769552836652, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.187 107302 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f8d908cb130>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.188 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.188 107302 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.188 107302 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.188 107302 INFO oslo_service.service [-] Starting 1 workers
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.192 107302 DEBUG oslo_service.service [-] Started child 107685 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.195 107685 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-233275'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.195 107302 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmptt0yfd35/privsep.sock']
Jan 27 22:28:04 compute-0 sudo[107684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-virlpdcomfgdymahcdkyhchxpcrkueoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552883.9232802-484-216453733705674/AnsiballZ_stat.py'
Jan 27 22:28:04 compute-0 sudo[107684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.215 107685 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.215 107685 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.215 107685 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.218 107685 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.224 107685 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.229 107685 INFO eventlet.wsgi.server [-] (107685) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Jan 27 22:28:04 compute-0 python3.9[107688]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:28:04 compute-0 sudo[107684]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:04 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 27 22:28:04 compute-0 sudo[107815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtvwppvqzeqjcidwxjhkolggwtqyvxld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552883.9232802-484-216453733705674/AnsiballZ_copy.py'
Jan 27 22:28:04 compute-0 sudo[107815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.889 107302 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.889 107302 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmptt0yfd35/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.780 107797 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.786 107797 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.788 107797 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.788 107797 INFO oslo.privsep.daemon [-] privsep daemon running as pid 107797
Jan 27 22:28:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:04.892 107797 DEBUG oslo.privsep.daemon [-] privsep: reply[de707b0b-65a1-4bc7-9e10-783a6206212e]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:28:05 compute-0 python3.9[107817]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769552883.9232802-484-216453733705674/.source.yaml _original_basename=.z9yv8a9s follow=False checksum=ab66ab8934f39ce8b98f56a90d4b20bf1ce7e75a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:28:05 compute-0 sudo[107815]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:05 compute-0 sshd-session[99083]: Connection closed by 192.168.122.30 port 42110
Jan 27 22:28:05 compute-0 sshd-session[99078]: pam_unix(sshd:session): session closed for user zuul
Jan 27 22:28:05 compute-0 systemd[1]: session-21.scope: Deactivated successfully.
Jan 27 22:28:05 compute-0 systemd[1]: session-21.scope: Consumed 35.004s CPU time.
Jan 27 22:28:05 compute-0 systemd-logind[789]: Session 21 logged out. Waiting for processes to exit.
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.377 107797 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.377 107797 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.378 107797 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:28:05 compute-0 systemd-logind[789]: Removed session 21.
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.896 107797 DEBUG oslo.privsep.daemon [-] privsep: reply[eeff314c-83e6-4339-ad02-812600e9a71a]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.899 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=e88f80e1-ee63-4bdc-95c3-ad473efb7428, column=external_ids, values=({'neutron:ovn-metadata-id': '842a0989-beb5-5925-8d46-8f8ef5962889'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.907 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e88f80e1-ee63-4bdc-95c3-ad473efb7428, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.914 107302 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.915 107302 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.915 107302 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.915 107302 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.915 107302 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.916 107302 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.916 107302 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.916 107302 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.917 107302 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.917 107302 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.917 107302 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.917 107302 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.918 107302 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.918 107302 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.918 107302 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.919 107302 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.919 107302 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.920 107302 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.920 107302 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.920 107302 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.920 107302 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.921 107302 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.921 107302 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.921 107302 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.921 107302 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.922 107302 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.922 107302 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.922 107302 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.923 107302 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.923 107302 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.923 107302 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.923 107302 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.924 107302 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.924 107302 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.924 107302 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.924 107302 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.925 107302 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.925 107302 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.926 107302 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.926 107302 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.926 107302 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.927 107302 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.927 107302 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.927 107302 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.927 107302 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.928 107302 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.928 107302 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.928 107302 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.928 107302 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.929 107302 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.929 107302 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.929 107302 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.929 107302 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.929 107302 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.930 107302 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.930 107302 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.930 107302 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.930 107302 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.931 107302 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.931 107302 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.931 107302 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.931 107302 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.932 107302 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.932 107302 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.932 107302 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.933 107302 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.933 107302 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.933 107302 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.933 107302 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.934 107302 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.934 107302 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.934 107302 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.934 107302 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.935 107302 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.935 107302 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.935 107302 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.935 107302 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.936 107302 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.936 107302 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.936 107302 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.936 107302 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.937 107302 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.937 107302 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.937 107302 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.937 107302 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.938 107302 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.938 107302 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.938 107302 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.938 107302 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.939 107302 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.939 107302 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.939 107302 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.939 107302 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.940 107302 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.940 107302 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.941 107302 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.941 107302 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.941 107302 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.941 107302 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.942 107302 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.942 107302 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.942 107302 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.942 107302 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.943 107302 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.943 107302 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.943 107302 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.943 107302 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.944 107302 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.944 107302 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.944 107302 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.944 107302 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.945 107302 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.945 107302 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.945 107302 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.945 107302 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.946 107302 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.946 107302 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.946 107302 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.946 107302 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.947 107302 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.947 107302 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.947 107302 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.947 107302 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.948 107302 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.948 107302 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.949 107302 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.949 107302 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.949 107302 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.949 107302 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.950 107302 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.950 107302 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.950 107302 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.951 107302 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.951 107302 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.951 107302 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.952 107302 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.952 107302 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.952 107302 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.953 107302 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.953 107302 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.953 107302 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.954 107302 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.954 107302 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.954 107302 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.954 107302 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.955 107302 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.955 107302 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.955 107302 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.955 107302 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.956 107302 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.956 107302 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.956 107302 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.956 107302 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.956 107302 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.957 107302 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.957 107302 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.957 107302 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.957 107302 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.957 107302 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.957 107302 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.957 107302 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.958 107302 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.958 107302 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.958 107302 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.958 107302 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.958 107302 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.959 107302 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.959 107302 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.959 107302 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.959 107302 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.959 107302 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.960 107302 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.960 107302 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.960 107302 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.960 107302 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.961 107302 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.961 107302 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.961 107302 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.961 107302 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.961 107302 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.961 107302 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.962 107302 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.962 107302 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.962 107302 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.962 107302 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.963 107302 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.963 107302 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.963 107302 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.963 107302 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.963 107302 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.963 107302 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.964 107302 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.964 107302 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.964 107302 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.964 107302 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.965 107302 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.965 107302 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.965 107302 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.965 107302 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.965 107302 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.965 107302 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.966 107302 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.966 107302 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.966 107302 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.966 107302 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.966 107302 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.967 107302 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.967 107302 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.967 107302 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.967 107302 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.967 107302 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.967 107302 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.967 107302 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.968 107302 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.968 107302 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.968 107302 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.968 107302 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.968 107302 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.968 107302 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.968 107302 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.968 107302 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.969 107302 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.969 107302 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.969 107302 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.969 107302 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.969 107302 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.969 107302 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.969 107302 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.970 107302 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.970 107302 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.970 107302 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.970 107302 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.971 107302 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.971 107302 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.971 107302 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.971 107302 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.971 107302 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.972 107302 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.972 107302 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.972 107302 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.972 107302 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.972 107302 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.973 107302 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.973 107302 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.973 107302 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.973 107302 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.973 107302 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.974 107302 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.974 107302 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.974 107302 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.974 107302 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.974 107302 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.975 107302 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.975 107302 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.975 107302 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.975 107302 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.975 107302 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.976 107302 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.976 107302 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.976 107302 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.976 107302 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.976 107302 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.976 107302 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.977 107302 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.977 107302 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.977 107302 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.977 107302 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.978 107302 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.978 107302 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.978 107302 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.978 107302 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.978 107302 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.979 107302 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.979 107302 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.979 107302 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.979 107302 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.979 107302 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.980 107302 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.980 107302 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.980 107302 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.980 107302 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.980 107302 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.981 107302 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.981 107302 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.981 107302 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.981 107302 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.981 107302 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.982 107302 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.982 107302 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.982 107302 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.982 107302 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.982 107302 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.983 107302 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.983 107302 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.983 107302 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.983 107302 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.983 107302 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.983 107302 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.983 107302 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.984 107302 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.984 107302 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.984 107302 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.984 107302 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:28:05 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:28:05.984 107302 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 27 22:28:09 compute-0 podman[107846]: 2026-01-27 22:28:09.377409922 +0000 UTC m=+0.084980290 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 22:28:10 compute-0 sshd-session[107873]: Accepted publickey for zuul from 192.168.122.30 port 33608 ssh2: ECDSA SHA256:f2siSFgqhRl+V43NMPJ82N3mZUylXFtu0KAbYfQTK7A
Jan 27 22:28:10 compute-0 systemd-logind[789]: New session 22 of user zuul.
Jan 27 22:28:10 compute-0 systemd[1]: Started Session 22 of User zuul.
Jan 27 22:28:10 compute-0 sshd-session[107873]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 22:28:11 compute-0 python3.9[108026]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:28:12 compute-0 sudo[108180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkqhhpgbfawqszuqegpvjrucvauqgobl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552891.8323674-29-4251888995212/AnsiballZ_command.py'
Jan 27 22:28:12 compute-0 sudo[108180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:12 compute-0 python3.9[108182]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:28:12 compute-0 sudo[108180]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:13 compute-0 sudo[108345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzjytjgsjsgxfdgbpsrszahhcacolfkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552892.8512125-40-142000157429051/AnsiballZ_systemd_service.py'
Jan 27 22:28:13 compute-0 sudo[108345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:13 compute-0 python3.9[108347]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 22:28:13 compute-0 systemd[1]: Reloading.
Jan 27 22:28:13 compute-0 systemd-sysv-generator[108374]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:28:13 compute-0 systemd-rc-local-generator[108371]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:28:14 compute-0 sudo[108345]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:14 compute-0 python3.9[108532]: ansible-ansible.builtin.service_facts Invoked
Jan 27 22:28:14 compute-0 network[108549]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 22:28:14 compute-0 network[108550]: 'network-scripts' will be removed from distribution in near future.
Jan 27 22:28:14 compute-0 network[108551]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 22:28:19 compute-0 sudo[108810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clzjjrhuhqximevgpfnzcmnfryeoinpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552898.9891224-59-68109178275390/AnsiballZ_systemd_service.py'
Jan 27 22:28:19 compute-0 sudo[108810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:19 compute-0 python3.9[108812]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:28:20 compute-0 sudo[108810]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:21 compute-0 sudo[108963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhmyhmafyvizrcvwaryggwzmucuxzuzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552900.8181708-59-37088621993565/AnsiballZ_systemd_service.py'
Jan 27 22:28:21 compute-0 sudo[108963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:21 compute-0 python3.9[108965]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:28:21 compute-0 sudo[108963]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:21 compute-0 sudo[109116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezmtpbcqawduxhhgqtaudbrhalspqxod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552901.541478-59-120156782367170/AnsiballZ_systemd_service.py'
Jan 27 22:28:21 compute-0 sudo[109116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:22 compute-0 python3.9[109118]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:28:22 compute-0 sudo[109116]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:22 compute-0 sudo[109269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czqtrtilofufmcdmaxjkjwjfpdrcojvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552902.332302-59-80413331371342/AnsiballZ_systemd_service.py'
Jan 27 22:28:22 compute-0 sudo[109269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:22 compute-0 python3.9[109271]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:28:22 compute-0 sudo[109269]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:23 compute-0 sudo[109422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wklpzmtwuqwztddlirblljtxzfhobqow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552903.0853856-59-232307671017004/AnsiballZ_systemd_service.py'
Jan 27 22:28:23 compute-0 sudo[109422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:23 compute-0 python3.9[109424]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:28:24 compute-0 sudo[109422]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:25 compute-0 sudo[109575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtfdfcrvozffeanlrmhtfgetusgelvcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552904.7372704-59-239261877188978/AnsiballZ_systemd_service.py'
Jan 27 22:28:25 compute-0 sudo[109575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:25 compute-0 python3.9[109577]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:28:25 compute-0 sudo[109575]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:25 compute-0 sudo[109728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulplbomvspqwxhftevuhrnposlfxzwyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552905.5258968-59-275326474535173/AnsiballZ_systemd_service.py'
Jan 27 22:28:25 compute-0 sudo[109728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:26 compute-0 python3.9[109730]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:28:26 compute-0 sudo[109728]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:26 compute-0 sudo[109881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puwixmbuegjheubqmzucchwoeeekchgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552906.3905232-111-97927337966749/AnsiballZ_file.py'
Jan 27 22:28:26 compute-0 sudo[109881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:27 compute-0 python3.9[109883]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:28:27 compute-0 sudo[109881]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:27 compute-0 sudo[110033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlujmsezvxslbcjtrubknhnsjhwazdtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552907.2035847-111-59442884071248/AnsiballZ_file.py'
Jan 27 22:28:27 compute-0 sudo[110033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:27 compute-0 python3.9[110035]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:28:27 compute-0 sudo[110033]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:28 compute-0 sudo[110185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuboifkjsrucwgwcmizxxpanuuusxegi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552907.9863248-111-11206631004181/AnsiballZ_file.py'
Jan 27 22:28:28 compute-0 sudo[110185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:28 compute-0 python3.9[110187]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:28:28 compute-0 sudo[110185]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:29 compute-0 sudo[110337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytlzumqwfrqehjgkktqqkcbctvoglaen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552908.692829-111-117751528186565/AnsiballZ_file.py'
Jan 27 22:28:29 compute-0 sudo[110337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:29 compute-0 python3.9[110339]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:28:29 compute-0 sudo[110337]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:29 compute-0 sudo[110489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nogtfqnudqmhfammpivhzraivqtetvet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552909.4651744-111-17023681440994/AnsiballZ_file.py'
Jan 27 22:28:29 compute-0 sudo[110489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:29 compute-0 python3.9[110491]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:28:29 compute-0 sudo[110489]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:30 compute-0 sudo[110641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcoqyvqhhznbgsvhcawwufbjcnzicqsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552910.089809-111-75515720783539/AnsiballZ_file.py'
Jan 27 22:28:30 compute-0 sudo[110641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:30 compute-0 python3.9[110643]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:28:30 compute-0 sudo[110641]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:30 compute-0 sudo[110793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmbpgtjgtwjygwbtsoisykhsaijqngqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552910.679491-111-202856102230754/AnsiballZ_file.py'
Jan 27 22:28:30 compute-0 sudo[110793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:31 compute-0 python3.9[110795]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:28:31 compute-0 sudo[110793]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:31 compute-0 sudo[110945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbkjgtjricgruuwbehdfwxzqhplludyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552911.389462-161-226004998430871/AnsiballZ_file.py'
Jan 27 22:28:31 compute-0 sudo[110945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:31 compute-0 python3.9[110947]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:28:31 compute-0 sudo[110945]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:32 compute-0 sudo[111097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iidjdknnjasexxprntjxewzpkwgodrsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552912.0731292-161-149626642352164/AnsiballZ_file.py'
Jan 27 22:28:32 compute-0 sudo[111097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:32 compute-0 python3.9[111099]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:28:32 compute-0 sudo[111097]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:32 compute-0 sudo[111259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzcgqmhfohsjhmqspvhinypriqfydvsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552912.6436505-161-98006213364997/AnsiballZ_file.py'
Jan 27 22:28:32 compute-0 sudo[111259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:32 compute-0 podman[111223]: 2026-01-27 22:28:32.9435144 +0000 UTC m=+0.060522803 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 22:28:33 compute-0 python3.9[111262]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:28:33 compute-0 sudo[111259]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:33 compute-0 sudo[111418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnhlkpsuakayuqsbmrrkfasuaddjtwan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552913.2949204-161-58372438234662/AnsiballZ_file.py'
Jan 27 22:28:33 compute-0 sudo[111418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:33 compute-0 python3.9[111420]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:28:33 compute-0 sudo[111418]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:34 compute-0 sudo[111570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikxatdaocgcyyoszmfhdtlljilhsgalz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552913.9611478-161-269881297428537/AnsiballZ_file.py'
Jan 27 22:28:34 compute-0 sudo[111570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:34 compute-0 python3.9[111572]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:28:34 compute-0 sudo[111570]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:35 compute-0 sudo[111722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgdbopsasnoqpukhfylzgoczgupreqhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552914.693704-161-48657252470707/AnsiballZ_file.py'
Jan 27 22:28:35 compute-0 sudo[111722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:35 compute-0 python3.9[111724]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:28:35 compute-0 sudo[111722]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:35 compute-0 sudo[111874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmerfxjbkhrsktpglnkmlvunoouwzcet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552915.3720415-161-50398525047180/AnsiballZ_file.py'
Jan 27 22:28:35 compute-0 sudo[111874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:35 compute-0 python3.9[111876]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:28:35 compute-0 sudo[111874]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:36 compute-0 sudo[112026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcvfkyarbajfxytyvxkigengnhhmhvdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552916.0512037-212-252515888107713/AnsiballZ_command.py'
Jan 27 22:28:36 compute-0 sudo[112026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:36 compute-0 python3.9[112028]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:28:36 compute-0 sudo[112026]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:37 compute-0 python3.9[112180]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 22:28:37 compute-0 sudo[112330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pljyimjpgrlawflxaaesjztkphxlenmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552917.3720243-230-123558605611035/AnsiballZ_systemd_service.py'
Jan 27 22:28:37 compute-0 sudo[112330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:37 compute-0 python3.9[112332]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 22:28:37 compute-0 systemd[1]: Reloading.
Jan 27 22:28:38 compute-0 systemd-sysv-generator[112361]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:28:38 compute-0 systemd-rc-local-generator[112358]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:28:38 compute-0 sudo[112330]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:38 compute-0 sudo[112517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqpdauuriqanygommmjgpqydgsjlkgtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552918.3196857-238-253690636170492/AnsiballZ_command.py'
Jan 27 22:28:38 compute-0 sudo[112517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:38 compute-0 python3.9[112519]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:28:38 compute-0 sudo[112517]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:39 compute-0 sudo[112670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyxwsissmhjhxqkmuzvkbftnraqpjpjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552918.9740977-238-120391075793069/AnsiballZ_command.py'
Jan 27 22:28:39 compute-0 sudo[112670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:39 compute-0 python3.9[112672]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:28:39 compute-0 sudo[112670]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:39 compute-0 podman[112674]: 2026-01-27 22:28:39.606835828 +0000 UTC m=+0.111623810 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 22:28:39 compute-0 sudo[112850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcbtseletfpfdlhlwotcslmwyfyzrzij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552919.644563-238-78388546620179/AnsiballZ_command.py'
Jan 27 22:28:39 compute-0 sudo[112850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:40 compute-0 python3.9[112852]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:28:40 compute-0 sudo[112850]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:40 compute-0 sudo[113003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrrgrlrbwfmbmtabdpaqjrjsyzgqqijo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552920.3163464-238-151052073591137/AnsiballZ_command.py'
Jan 27 22:28:40 compute-0 sudo[113003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:40 compute-0 python3.9[113005]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:28:40 compute-0 sudo[113003]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:41 compute-0 sudo[113156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snnxepmxizmgmmlfnzaxgblhfydbwkiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552921.1102493-238-137273863058763/AnsiballZ_command.py'
Jan 27 22:28:41 compute-0 sudo[113156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:41 compute-0 python3.9[113158]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:28:41 compute-0 sudo[113156]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:42 compute-0 sudo[113309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryumdbbaqmcgzljmgygiawmjvgytvtpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552921.7622674-238-146490154843298/AnsiballZ_command.py'
Jan 27 22:28:42 compute-0 sudo[113309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:42 compute-0 python3.9[113311]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:28:42 compute-0 sudo[113309]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:42 compute-0 sudo[113462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckzeerafmxknqatxzcosyhononkxgtqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552922.4634619-238-204467293091753/AnsiballZ_command.py'
Jan 27 22:28:42 compute-0 sudo[113462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:42 compute-0 python3.9[113464]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:28:42 compute-0 sudo[113462]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:43 compute-0 sudo[113615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auzokfgpbklurnjkmkomzaswlezjgpxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552923.3326514-292-162618962265538/AnsiballZ_getent.py'
Jan 27 22:28:43 compute-0 sudo[113615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:43 compute-0 python3.9[113617]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 27 22:28:43 compute-0 sudo[113615]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:44 compute-0 sudo[113768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmvxcrxtypoiphjwvcupadgmhtwjovhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552924.2012901-300-43861828669765/AnsiballZ_group.py'
Jan 27 22:28:44 compute-0 sudo[113768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:44 compute-0 python3.9[113770]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 27 22:28:44 compute-0 groupadd[113771]: group added to /etc/group: name=libvirt, GID=42473
Jan 27 22:28:44 compute-0 groupadd[113771]: group added to /etc/gshadow: name=libvirt
Jan 27 22:28:44 compute-0 groupadd[113771]: new group: name=libvirt, GID=42473
Jan 27 22:28:44 compute-0 sudo[113768]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:45 compute-0 sudo[113926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpoibpwjncxyeypfxpiigpddnqozxlcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552925.1685503-308-40346073214565/AnsiballZ_user.py'
Jan 27 22:28:45 compute-0 sudo[113926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:45 compute-0 python3.9[113928]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 27 22:28:45 compute-0 useradd[113930]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Jan 27 22:28:45 compute-0 sudo[113926]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:46 compute-0 sudo[114086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iokjvcgvenbujrhmssrzqcikgvyrzmfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552926.2994983-319-216649672730552/AnsiballZ_setup.py'
Jan 27 22:28:46 compute-0 sudo[114086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:46 compute-0 python3.9[114088]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 22:28:47 compute-0 sudo[114086]: pam_unix(sudo:session): session closed for user root
Jan 27 22:28:47 compute-0 sudo[114170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbmxpcahfdlzgsthqqtgquanwrzclzmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769552926.2994983-319-216649672730552/AnsiballZ_dnf.py'
Jan 27 22:28:47 compute-0 sudo[114170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:28:47 compute-0 python3.9[114172]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 22:29:03 compute-0 podman[114360]: 2026-01-27 22:29:03.383539477 +0000 UTC m=+0.051440709 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 27 22:29:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:29:04.114 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:29:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:29:04.115 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:29:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:29:04.115 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:29:10 compute-0 podman[114386]: 2026-01-27 22:29:10.472018248 +0000 UTC m=+0.163379296 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 22:29:14 compute-0 kernel: SELinux:  Converting 2764 SID table entries...
Jan 27 22:29:14 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 22:29:14 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 27 22:29:14 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 22:29:14 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 27 22:29:14 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 22:29:14 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 22:29:14 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 22:29:23 compute-0 kernel: SELinux:  Converting 2764 SID table entries...
Jan 27 22:29:23 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 22:29:23 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 27 22:29:23 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 22:29:23 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 27 22:29:23 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 22:29:23 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 22:29:23 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 22:29:34 compute-0 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 27 22:29:34 compute-0 podman[114431]: 2026-01-27 22:29:34.408483193 +0000 UTC m=+0.082031363 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 27 22:29:41 compute-0 podman[118423]: 2026-01-27 22:29:41.386463108 +0000 UTC m=+0.090639102 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 22:30:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:30:04.116 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:30:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:30:04.116 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:30:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:30:04.117 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:30:05 compute-0 podman[131332]: 2026-01-27 22:30:05.355299007 +0000 UTC m=+0.058072047 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 22:30:12 compute-0 podman[131372]: 2026-01-27 22:30:12.404591536 +0000 UTC m=+0.104722602 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 22:30:16 compute-0 kernel: SELinux:  Converting 2765 SID table entries...
Jan 27 22:30:16 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 27 22:30:16 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 27 22:30:16 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 27 22:30:16 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 27 22:30:16 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 27 22:30:16 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 27 22:30:16 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 27 22:30:17 compute-0 groupadd[131409]: group added to /etc/group: name=dnsmasq, GID=993
Jan 27 22:30:17 compute-0 groupadd[131409]: group added to /etc/gshadow: name=dnsmasq
Jan 27 22:30:17 compute-0 groupadd[131409]: new group: name=dnsmasq, GID=993
Jan 27 22:30:17 compute-0 useradd[131416]: new user: name=dnsmasq, UID=992, GID=993, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Jan 27 22:30:17 compute-0 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Jan 27 22:30:17 compute-0 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 27 22:30:17 compute-0 dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Jan 27 22:30:17 compute-0 groupadd[131429]: group added to /etc/group: name=clevis, GID=992
Jan 27 22:30:17 compute-0 groupadd[131429]: group added to /etc/gshadow: name=clevis
Jan 27 22:30:18 compute-0 groupadd[131429]: new group: name=clevis, GID=992
Jan 27 22:30:18 compute-0 useradd[131436]: new user: name=clevis, UID=991, GID=992, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Jan 27 22:30:18 compute-0 usermod[131446]: add 'clevis' to group 'tss'
Jan 27 22:30:18 compute-0 usermod[131446]: add 'clevis' to shadow group 'tss'
Jan 27 22:30:20 compute-0 polkitd[44023]: Reloading rules
Jan 27 22:30:20 compute-0 polkitd[44023]: Collecting garbage unconditionally...
Jan 27 22:30:20 compute-0 polkitd[44023]: Loading rules from directory /etc/polkit-1/rules.d
Jan 27 22:30:20 compute-0 polkitd[44023]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 27 22:30:20 compute-0 polkitd[44023]: Finished loading, compiling and executing 3 rules
Jan 27 22:30:20 compute-0 polkitd[44023]: Reloading rules
Jan 27 22:30:20 compute-0 polkitd[44023]: Collecting garbage unconditionally...
Jan 27 22:30:20 compute-0 polkitd[44023]: Loading rules from directory /etc/polkit-1/rules.d
Jan 27 22:30:20 compute-0 polkitd[44023]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 27 22:30:20 compute-0 polkitd[44023]: Finished loading, compiling and executing 3 rules
Jan 27 22:30:21 compute-0 groupadd[131636]: group added to /etc/group: name=ceph, GID=167
Jan 27 22:30:21 compute-0 groupadd[131636]: group added to /etc/gshadow: name=ceph
Jan 27 22:30:21 compute-0 groupadd[131636]: new group: name=ceph, GID=167
Jan 27 22:30:21 compute-0 useradd[131642]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Jan 27 22:30:24 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Jan 27 22:30:24 compute-0 sshd[1004]: Received signal 15; terminating.
Jan 27 22:30:24 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Jan 27 22:30:24 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Jan 27 22:30:24 compute-0 systemd[1]: sshd.service: Consumed 2.416s CPU time, read 32.0K from disk, written 20.0K to disk.
Jan 27 22:30:24 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Jan 27 22:30:24 compute-0 systemd[1]: Stopping sshd-keygen.target...
Jan 27 22:30:24 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 27 22:30:24 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 27 22:30:24 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 27 22:30:24 compute-0 systemd[1]: Reached target sshd-keygen.target.
Jan 27 22:30:24 compute-0 systemd[1]: Starting OpenSSH server daemon...
Jan 27 22:30:24 compute-0 sshd[132161]: Server listening on 0.0.0.0 port 22.
Jan 27 22:30:24 compute-0 sshd[132161]: Server listening on :: port 22.
Jan 27 22:30:24 compute-0 systemd[1]: Started OpenSSH server daemon.
Jan 27 22:30:26 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 22:30:26 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 27 22:30:26 compute-0 systemd[1]: Reloading.
Jan 27 22:30:26 compute-0 systemd-rc-local-generator[132415]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:30:26 compute-0 systemd-sysv-generator[132421]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:30:26 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 22:30:29 compute-0 sudo[114170]: pam_unix(sudo:session): session closed for user root
Jan 27 22:30:30 compute-0 sudo[136104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjrycwzercktumgrfipamftxaavkmwbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553029.352655-331-47542407288903/AnsiballZ_systemd.py'
Jan 27 22:30:30 compute-0 sudo[136104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:30:30 compute-0 python3.9[136125]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 22:30:30 compute-0 systemd[1]: Reloading.
Jan 27 22:30:30 compute-0 systemd-sysv-generator[136518]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:30:30 compute-0 systemd-rc-local-generator[136513]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:30:30 compute-0 sudo[136104]: pam_unix(sudo:session): session closed for user root
Jan 27 22:30:31 compute-0 sudo[137250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afvlxrkbzckpyiqbouahwotacanywlrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553030.9293745-331-25568844220417/AnsiballZ_systemd.py'
Jan 27 22:30:31 compute-0 sudo[137250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:30:31 compute-0 python3.9[137271]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 22:30:31 compute-0 systemd[1]: Reloading.
Jan 27 22:30:31 compute-0 systemd-rc-local-generator[137652]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:30:31 compute-0 systemd-sysv-generator[137656]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:30:32 compute-0 sudo[137250]: pam_unix(sudo:session): session closed for user root
Jan 27 22:30:32 compute-0 sudo[138331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkydhskcngygczojccgxdbiijzmtbpoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553032.139147-331-150631399946030/AnsiballZ_systemd.py'
Jan 27 22:30:32 compute-0 sudo[138331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:30:32 compute-0 python3.9[138352]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 22:30:32 compute-0 systemd[1]: Reloading.
Jan 27 22:30:32 compute-0 systemd-rc-local-generator[138728]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:30:32 compute-0 systemd-sysv-generator[138732]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:30:33 compute-0 sudo[138331]: pam_unix(sudo:session): session closed for user root
Jan 27 22:30:33 compute-0 sudo[139486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttjcomddpqpfjqztblopgbgihhqckfby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553033.1898263-331-136464411480712/AnsiballZ_systemd.py'
Jan 27 22:30:33 compute-0 sudo[139486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:30:33 compute-0 python3.9[139511]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 22:30:33 compute-0 systemd[1]: Reloading.
Jan 27 22:30:33 compute-0 systemd-rc-local-generator[140037]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:30:33 compute-0 systemd-sysv-generator[140041]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:30:34 compute-0 sudo[139486]: pam_unix(sudo:session): session closed for user root
Jan 27 22:30:34 compute-0 sudo[140839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exjyltmtfyvghqqlyitcjhvbfcassbao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553034.3118382-360-259312479165898/AnsiballZ_systemd.py'
Jan 27 22:30:34 compute-0 sudo[140839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:30:34 compute-0 python3.9[140856]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 22:30:35 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 22:30:35 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 22:30:35 compute-0 systemd[1]: man-db-cache-update.service: Consumed 11.944s CPU time.
Jan 27 22:30:35 compute-0 systemd[1]: run-r3b475483317e41b983b5b1a6986928ee.service: Deactivated successfully.
Jan 27 22:30:35 compute-0 podman[141709]: 2026-01-27 22:30:35.8055199 +0000 UTC m=+0.061207308 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 22:30:35 compute-0 systemd[1]: Reloading.
Jan 27 22:30:36 compute-0 systemd-rc-local-generator[141756]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:30:36 compute-0 systemd-sysv-generator[141762]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:30:36 compute-0 sudo[140839]: pam_unix(sudo:session): session closed for user root
Jan 27 22:30:36 compute-0 sudo[141915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoeogdzkmicjlmwwhduzhxkpsbergatv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553036.3427863-360-118375488277585/AnsiballZ_systemd.py'
Jan 27 22:30:36 compute-0 sudo[141915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:30:36 compute-0 python3.9[141917]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 22:30:36 compute-0 systemd[1]: Reloading.
Jan 27 22:30:37 compute-0 systemd-rc-local-generator[141946]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:30:37 compute-0 systemd-sysv-generator[141951]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:30:37 compute-0 sudo[141915]: pam_unix(sudo:session): session closed for user root
Jan 27 22:30:37 compute-0 sudo[142106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yezwuxeljrzyrajirmtbvycilyufstwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553037.354784-360-197674214742043/AnsiballZ_systemd.py'
Jan 27 22:30:37 compute-0 sudo[142106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:30:37 compute-0 python3.9[142108]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 22:30:37 compute-0 systemd[1]: Reloading.
Jan 27 22:30:38 compute-0 systemd-rc-local-generator[142141]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:30:38 compute-0 systemd-sysv-generator[142144]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:30:38 compute-0 sudo[142106]: pam_unix(sudo:session): session closed for user root
Jan 27 22:30:38 compute-0 sudo[142297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dezupgewslxkzblsqoexuslknybysdss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553038.2963686-360-12563319196144/AnsiballZ_systemd.py'
Jan 27 22:30:38 compute-0 sudo[142297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:30:38 compute-0 python3.9[142299]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 22:30:38 compute-0 sudo[142297]: pam_unix(sudo:session): session closed for user root
Jan 27 22:30:39 compute-0 sudo[142452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhdviddgkbifysylzydcgwvzposttyfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553038.998467-360-212633448117983/AnsiballZ_systemd.py'
Jan 27 22:30:39 compute-0 sudo[142452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:30:39 compute-0 python3.9[142454]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 22:30:39 compute-0 systemd[1]: Reloading.
Jan 27 22:30:39 compute-0 systemd-sysv-generator[142484]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:30:39 compute-0 systemd-rc-local-generator[142480]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:30:39 compute-0 sudo[142452]: pam_unix(sudo:session): session closed for user root
Jan 27 22:30:40 compute-0 sudo[142641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycctrnjerurjchfgbxnufhfpsiqeetrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553039.9851444-396-77577953398531/AnsiballZ_systemd.py'
Jan 27 22:30:40 compute-0 sudo[142641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:30:40 compute-0 python3.9[142643]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 27 22:30:40 compute-0 systemd[1]: Reloading.
Jan 27 22:30:40 compute-0 systemd-sysv-generator[142678]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:30:40 compute-0 systemd-rc-local-generator[142674]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:30:41 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 27 22:30:41 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 27 22:30:41 compute-0 sudo[142641]: pam_unix(sudo:session): session closed for user root
Jan 27 22:30:41 compute-0 sudo[142835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyryvmymrxhqctvygcfiqjhslrdikayc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553041.354688-404-62327351767377/AnsiballZ_systemd.py'
Jan 27 22:30:41 compute-0 sudo[142835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:30:41 compute-0 python3.9[142837]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 22:30:41 compute-0 sudo[142835]: pam_unix(sudo:session): session closed for user root
Jan 27 22:30:42 compute-0 sudo[142990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spfhmwydggxkpwzcrkqipniewmycidoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553042.1144876-404-250867808893903/AnsiballZ_systemd.py'
Jan 27 22:30:42 compute-0 sudo[142990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:30:42 compute-0 python3.9[142992]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 22:30:42 compute-0 sudo[142990]: pam_unix(sudo:session): session closed for user root
Jan 27 22:30:42 compute-0 podman[142994]: 2026-01-27 22:30:42.874518664 +0000 UTC m=+0.126028090 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, container_name=ovn_controller)
Jan 27 22:30:43 compute-0 sudo[143171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llwvdzagpjooxcmkhkdxvdciwtlpqcab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553042.93561-404-125920501370844/AnsiballZ_systemd.py'
Jan 27 22:30:43 compute-0 sudo[143171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:30:43 compute-0 python3.9[143173]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 22:30:43 compute-0 sudo[143171]: pam_unix(sudo:session): session closed for user root
Jan 27 22:30:43 compute-0 sudo[143326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uunmuwihviptorwsjgxtfgjnqanfuock ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553043.6873841-404-142122637433492/AnsiballZ_systemd.py'
Jan 27 22:30:43 compute-0 sudo[143326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:30:44 compute-0 python3.9[143328]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 22:30:44 compute-0 sudo[143326]: pam_unix(sudo:session): session closed for user root
Jan 27 22:30:44 compute-0 sudo[143481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khocmnortdzdhietmwbksjnuvahbqobr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553044.526527-404-90565552229199/AnsiballZ_systemd.py'
Jan 27 22:30:44 compute-0 sudo[143481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:30:45 compute-0 python3.9[143483]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 22:30:45 compute-0 sudo[143481]: pam_unix(sudo:session): session closed for user root
Jan 27 22:30:45 compute-0 sudo[143636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzjnhoijluyrhfqbltqhscpvuypgphaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553045.313085-404-103315390494365/AnsiballZ_systemd.py'
Jan 27 22:30:45 compute-0 sudo[143636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:30:45 compute-0 python3.9[143638]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 22:30:45 compute-0 sudo[143636]: pam_unix(sudo:session): session closed for user root
Jan 27 22:30:46 compute-0 sudo[143791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfivhmislcrcxkyagghiecuspywtokjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553046.086682-404-72928362752054/AnsiballZ_systemd.py'
Jan 27 22:30:46 compute-0 sudo[143791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:30:46 compute-0 python3.9[143793]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 22:30:47 compute-0 sudo[143791]: pam_unix(sudo:session): session closed for user root
Jan 27 22:30:48 compute-0 sudo[143946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfepcfsoixdrweobuppthpcyuhxfhkyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553047.7869527-404-46819810724037/AnsiballZ_systemd.py'
Jan 27 22:30:48 compute-0 sudo[143946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:30:48 compute-0 python3.9[143948]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 22:30:48 compute-0 sudo[143946]: pam_unix(sudo:session): session closed for user root
Jan 27 22:30:48 compute-0 sudo[144101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwdsoljcrhhfukdkydwkxbhxkdheagig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553048.5939384-404-87728710022760/AnsiballZ_systemd.py'
Jan 27 22:30:48 compute-0 sudo[144101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:30:49 compute-0 python3.9[144103]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 22:30:49 compute-0 sudo[144101]: pam_unix(sudo:session): session closed for user root
Jan 27 22:30:49 compute-0 sudo[144256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bruajoeblanvfgadfifctfnipbhbnghr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553049.3582191-404-258996381950253/AnsiballZ_systemd.py'
Jan 27 22:30:49 compute-0 sudo[144256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:30:49 compute-0 python3.9[144258]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 22:30:50 compute-0 sudo[144256]: pam_unix(sudo:session): session closed for user root
Jan 27 22:30:50 compute-0 sudo[144411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlzktlwubnvczlqlneputvozoozvruon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553050.1421492-404-117376497680635/AnsiballZ_systemd.py'
Jan 27 22:30:50 compute-0 sudo[144411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:30:50 compute-0 python3.9[144413]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 22:30:50 compute-0 sudo[144411]: pam_unix(sudo:session): session closed for user root
Jan 27 22:30:51 compute-0 sudo[144566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dskbhhkmdowffliehhythdtgttdztryr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553050.929806-404-49755067019652/AnsiballZ_systemd.py'
Jan 27 22:30:51 compute-0 sudo[144566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:30:51 compute-0 python3.9[144568]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 22:30:51 compute-0 sudo[144566]: pam_unix(sudo:session): session closed for user root
Jan 27 22:30:51 compute-0 sudo[144721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijbpcucyqbpnreifhldtslukqlrqyznc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553051.6887743-404-34318124118552/AnsiballZ_systemd.py'
Jan 27 22:30:51 compute-0 sudo[144721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:30:52 compute-0 python3.9[144723]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 22:30:52 compute-0 sudo[144721]: pam_unix(sudo:session): session closed for user root
Jan 27 22:30:52 compute-0 sudo[144876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqnjdqcjtiztefiehluyivyylhvnylav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553052.6073132-404-108153405696167/AnsiballZ_systemd.py'
Jan 27 22:30:52 compute-0 sudo[144876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:30:53 compute-0 python3.9[144878]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 27 22:30:53 compute-0 sudo[144876]: pam_unix(sudo:session): session closed for user root
Jan 27 22:30:53 compute-0 sudo[145031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glcsgtvgowhcibyuykwlpxjeefvjtzse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553053.5685015-506-4847496207044/AnsiballZ_file.py'
Jan 27 22:30:53 compute-0 sudo[145031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:30:54 compute-0 python3.9[145033]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:30:54 compute-0 sudo[145031]: pam_unix(sudo:session): session closed for user root
Jan 27 22:30:54 compute-0 sudo[145183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzbacjsjbtrmmtrlxcunhcbkanrkckeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553054.1536992-506-123593631466119/AnsiballZ_file.py'
Jan 27 22:30:54 compute-0 sudo[145183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:30:54 compute-0 python3.9[145185]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:30:54 compute-0 sudo[145183]: pam_unix(sudo:session): session closed for user root
Jan 27 22:30:54 compute-0 sudo[145335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywofqdhicmkasubnffscgsumcfdwutxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553054.7350447-506-5510397174527/AnsiballZ_file.py'
Jan 27 22:30:54 compute-0 sudo[145335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:30:55 compute-0 python3.9[145337]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:30:55 compute-0 sudo[145335]: pam_unix(sudo:session): session closed for user root
Jan 27 22:30:55 compute-0 sudo[145487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehusvvfpzfxamobutduvazbubzhqaxrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553055.3015559-506-224628802175482/AnsiballZ_file.py'
Jan 27 22:30:55 compute-0 sudo[145487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:30:55 compute-0 python3.9[145489]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:30:55 compute-0 sudo[145487]: pam_unix(sudo:session): session closed for user root
Jan 27 22:30:56 compute-0 sudo[145639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdzfqmfgkxikrygodocwgtnateosahyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553055.97021-506-127385473302349/AnsiballZ_file.py'
Jan 27 22:30:56 compute-0 sudo[145639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:30:56 compute-0 python3.9[145641]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:30:56 compute-0 sudo[145639]: pam_unix(sudo:session): session closed for user root
Jan 27 22:30:56 compute-0 sudo[145791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtfpfyrviwxrkfuovlzziesekkhanmlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553056.559749-506-263856723044871/AnsiballZ_file.py'
Jan 27 22:30:56 compute-0 sudo[145791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:30:56 compute-0 python3.9[145793]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:30:56 compute-0 sudo[145791]: pam_unix(sudo:session): session closed for user root
Jan 27 22:30:57 compute-0 python3.9[145943]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:30:58 compute-0 sudo[146093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llakxexbzhiipzfqzziuwjyocaziwmzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553057.949218-557-166571737164802/AnsiballZ_stat.py'
Jan 27 22:30:58 compute-0 sudo[146093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:30:58 compute-0 python3.9[146095]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:30:58 compute-0 sudo[146093]: pam_unix(sudo:session): session closed for user root
Jan 27 22:30:59 compute-0 sudo[146218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtrorpjlsgzivpvuhmzoqqifbfjfqpft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553057.949218-557-166571737164802/AnsiballZ_copy.py'
Jan 27 22:30:59 compute-0 sudo[146218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:30:59 compute-0 python3.9[146220]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769553057.949218-557-166571737164802/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:30:59 compute-0 sudo[146218]: pam_unix(sudo:session): session closed for user root
Jan 27 22:30:59 compute-0 sudo[146370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suvmrbcqvpcdqogykhcvxykklwphjzvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553059.412755-557-693084864072/AnsiballZ_stat.py'
Jan 27 22:30:59 compute-0 sudo[146370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:30:59 compute-0 python3.9[146372]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:30:59 compute-0 sudo[146370]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:00 compute-0 sudo[146495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrekhasyzencevkjjlmrkxihsuncgxmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553059.412755-557-693084864072/AnsiballZ_copy.py'
Jan 27 22:31:00 compute-0 sudo[146495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:00 compute-0 python3.9[146497]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769553059.412755-557-693084864072/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:00 compute-0 sudo[146495]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:00 compute-0 sudo[146647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-petxyhthgpzekerpiincqlcounevgxkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553060.51977-557-181027058758477/AnsiballZ_stat.py'
Jan 27 22:31:00 compute-0 sudo[146647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:00 compute-0 python3.9[146649]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:31:01 compute-0 sudo[146647]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:01 compute-0 sudo[146772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbgfldrbmrxshpeqjdaoqozjstjobpno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553060.51977-557-181027058758477/AnsiballZ_copy.py'
Jan 27 22:31:01 compute-0 sudo[146772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:01 compute-0 python3.9[146774]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769553060.51977-557-181027058758477/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:01 compute-0 sudo[146772]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:01 compute-0 sudo[146924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpfgoqbznjncxmotktcqsmbumhdiqjag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553061.7197752-557-17374327778391/AnsiballZ_stat.py'
Jan 27 22:31:01 compute-0 sudo[146924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:02 compute-0 python3.9[146926]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:31:02 compute-0 sudo[146924]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:02 compute-0 sudo[147049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anhomcciktkphpagxijoigrguavotqlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553061.7197752-557-17374327778391/AnsiballZ_copy.py'
Jan 27 22:31:02 compute-0 sudo[147049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:02 compute-0 python3.9[147051]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769553061.7197752-557-17374327778391/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:02 compute-0 sudo[147049]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:03 compute-0 sudo[147201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjlwawsktwjiensjmrggkaxozrtwwfho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553062.7844436-557-112861054495203/AnsiballZ_stat.py'
Jan 27 22:31:03 compute-0 sudo[147201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:03 compute-0 python3.9[147203]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:31:03 compute-0 sudo[147201]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:03 compute-0 sudo[147326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snxsljbvivobkilnrxrrldoznspfcppm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553062.7844436-557-112861054495203/AnsiballZ_copy.py'
Jan 27 22:31:03 compute-0 sudo[147326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:03 compute-0 python3.9[147328]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769553062.7844436-557-112861054495203/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:03 compute-0 sudo[147326]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:31:04.117 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:31:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:31:04.117 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:31:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:31:04.117 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:31:04 compute-0 sudo[147478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnavgehaorerlmdcqgekdmlaimcwbydh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553063.9751744-557-64141216467746/AnsiballZ_stat.py'
Jan 27 22:31:04 compute-0 sudo[147478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:04 compute-0 python3.9[147480]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:31:04 compute-0 sudo[147478]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:04 compute-0 sudo[147603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzfhqrrihxlvrrsmbqbassdqcljkdvmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553063.9751744-557-64141216467746/AnsiballZ_copy.py'
Jan 27 22:31:04 compute-0 sudo[147603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:04 compute-0 python3.9[147605]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769553063.9751744-557-64141216467746/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:05 compute-0 sudo[147603]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:05 compute-0 sudo[147755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqxruzwejnureeoevjoqwhvltumjdluj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553065.1115422-557-79333504245893/AnsiballZ_stat.py'
Jan 27 22:31:05 compute-0 sudo[147755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:05 compute-0 python3.9[147757]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:31:05 compute-0 sudo[147755]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:05 compute-0 sudo[147893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiuaevdrbflfybhpkfodqmowzawuohbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553065.1115422-557-79333504245893/AnsiballZ_copy.py'
Jan 27 22:31:05 compute-0 sudo[147893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:05 compute-0 podman[147852]: 2026-01-27 22:31:05.922555212 +0000 UTC m=+0.055653036 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 22:31:06 compute-0 python3.9[147899]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769553065.1115422-557-79333504245893/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:06 compute-0 sudo[147893]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:06 compute-0 sudo[148049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bspwcniyrgwjzgtrtbuckoctqhiuvlsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553066.2756333-557-41870819358368/AnsiballZ_stat.py'
Jan 27 22:31:06 compute-0 sudo[148049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:06 compute-0 python3.9[148051]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:31:06 compute-0 sudo[148049]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:07 compute-0 sudo[148174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxegiqtudovuhibyxfnhelfhozcfnjpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553066.2756333-557-41870819358368/AnsiballZ_copy.py'
Jan 27 22:31:07 compute-0 sudo[148174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:07 compute-0 python3.9[148176]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769553066.2756333-557-41870819358368/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:07 compute-0 sudo[148174]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:07 compute-0 sudo[148326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njnmjzzepubirjfhzeygxoeieruwuhmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553067.5850353-670-135801952959886/AnsiballZ_command.py'
Jan 27 22:31:07 compute-0 sudo[148326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:08 compute-0 python3.9[148328]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 27 22:31:08 compute-0 sudo[148326]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:08 compute-0 sudo[148479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcyegvifoforkonlvpwplrnwsmnepssz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553068.4363184-679-10576218337526/AnsiballZ_file.py'
Jan 27 22:31:08 compute-0 sudo[148479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:08 compute-0 python3.9[148481]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:08 compute-0 sudo[148479]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:09 compute-0 sudo[148631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnwnxipntzdkxrfkdgcfsebsixugrryu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553069.0823245-679-105822380705782/AnsiballZ_file.py'
Jan 27 22:31:09 compute-0 sudo[148631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:09 compute-0 python3.9[148633]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:09 compute-0 sudo[148631]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:10 compute-0 sudo[148783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkolbcprxezablidftxonrczszpygshj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553069.7561452-679-95514678736068/AnsiballZ_file.py'
Jan 27 22:31:10 compute-0 sudo[148783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:10 compute-0 python3.9[148785]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:10 compute-0 sudo[148783]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:10 compute-0 sudo[148935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqmhjmphvkoheueurnvrqhlvlwrwpawt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553070.3801806-679-163500807383709/AnsiballZ_file.py'
Jan 27 22:31:10 compute-0 sudo[148935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:10 compute-0 python3.9[148937]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:10 compute-0 sudo[148935]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:11 compute-0 sudo[149087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uewvseilijcqhwrxplhlblsbzpbrztkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553070.963567-679-163758510182992/AnsiballZ_file.py'
Jan 27 22:31:11 compute-0 sudo[149087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:11 compute-0 python3.9[149089]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:11 compute-0 sudo[149087]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:11 compute-0 sudo[149239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kijtdtdilwavjwdduvubrhnwxhsypfnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553071.6123667-679-76409554284174/AnsiballZ_file.py'
Jan 27 22:31:11 compute-0 sudo[149239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:12 compute-0 python3.9[149241]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:12 compute-0 sudo[149239]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:12 compute-0 sudo[149391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsdvouuyoqcfguiixacaeklsnyruqyik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553072.1950164-679-58799546191628/AnsiballZ_file.py'
Jan 27 22:31:12 compute-0 sudo[149391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:12 compute-0 python3.9[149393]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:12 compute-0 sudo[149391]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:12 compute-0 sudo[149552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzccbyrsxvyupulvabbbrqifccnjqpus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553072.7511353-679-218389264284533/AnsiballZ_file.py'
Jan 27 22:31:12 compute-0 sudo[149552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:13 compute-0 podman[149517]: 2026-01-27 22:31:13.023031469 +0000 UTC m=+0.075168747 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 22:31:13 compute-0 python3.9[149561]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:13 compute-0 sudo[149552]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:13 compute-0 sudo[149722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miwkxtfygxfvcijcxhklkbvmzwwcxhyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553073.302377-679-122832958707160/AnsiballZ_file.py'
Jan 27 22:31:13 compute-0 sudo[149722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:13 compute-0 python3.9[149724]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:13 compute-0 sudo[149722]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:14 compute-0 sudo[149874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvihnyipqnmoquvndypfriulabwopkzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553073.9501143-679-126046643623987/AnsiballZ_file.py'
Jan 27 22:31:14 compute-0 sudo[149874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:14 compute-0 python3.9[149876]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:14 compute-0 sudo[149874]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:14 compute-0 sudo[150026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xeljwfilihgknazaazjjrbkkrwfrnckl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553074.4871116-679-148257154600371/AnsiballZ_file.py'
Jan 27 22:31:14 compute-0 sudo[150026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:14 compute-0 python3.9[150028]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:14 compute-0 sudo[150026]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:15 compute-0 sudo[150178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsudsbbafaihnpbgxgsxfcacrceeulcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553075.0851068-679-159115541136273/AnsiballZ_file.py'
Jan 27 22:31:15 compute-0 sudo[150178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:15 compute-0 python3.9[150180]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:15 compute-0 sudo[150178]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:16 compute-0 sudo[150330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnmojnsjtkzbykfhofefphkizzjfjdzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553075.7560189-679-42450995445822/AnsiballZ_file.py'
Jan 27 22:31:16 compute-0 sudo[150330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:16 compute-0 python3.9[150332]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:16 compute-0 sudo[150330]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:16 compute-0 sudo[150482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzxjcaamhhpypoftzvcohybdejfmvfit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553076.405041-679-252456381670268/AnsiballZ_file.py'
Jan 27 22:31:16 compute-0 sudo[150482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:16 compute-0 python3.9[150484]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:16 compute-0 sudo[150482]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:17 compute-0 sudo[150634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwhytnfuhtmnmulmrpvphcfsgphvgbco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553077.0690265-778-38583761708908/AnsiballZ_stat.py'
Jan 27 22:31:17 compute-0 sudo[150634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:17 compute-0 python3.9[150636]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:31:17 compute-0 sudo[150634]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:17 compute-0 sudo[150757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unahvszszuyzaqbduosupgfyjwwxqsom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553077.0690265-778-38583761708908/AnsiballZ_copy.py'
Jan 27 22:31:17 compute-0 sudo[150757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:18 compute-0 python3.9[150759]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769553077.0690265-778-38583761708908/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:18 compute-0 sudo[150757]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:18 compute-0 sudo[150909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkfppljhsgkfpeacxxwgdzrxgehgbejz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553078.2292008-778-106963899788/AnsiballZ_stat.py'
Jan 27 22:31:18 compute-0 sudo[150909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:18 compute-0 python3.9[150911]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:31:18 compute-0 sudo[150909]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:19 compute-0 sudo[151032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxytkzvydrpbuhxtxevrcselwjalsqkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553078.2292008-778-106963899788/AnsiballZ_copy.py'
Jan 27 22:31:19 compute-0 sudo[151032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:19 compute-0 python3.9[151034]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769553078.2292008-778-106963899788/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:19 compute-0 sudo[151032]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:19 compute-0 sudo[151184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciscwsaibcagdqngpfxiixqbvwowfejc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553079.448046-778-178140272760486/AnsiballZ_stat.py'
Jan 27 22:31:19 compute-0 sudo[151184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:19 compute-0 python3.9[151186]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:31:19 compute-0 sudo[151184]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:20 compute-0 sudo[151307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkqnpoifuanhbquxpwqsivnapckemdlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553079.448046-778-178140272760486/AnsiballZ_copy.py'
Jan 27 22:31:20 compute-0 sudo[151307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:20 compute-0 python3.9[151309]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769553079.448046-778-178140272760486/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:20 compute-0 sudo[151307]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:21 compute-0 sudo[151459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apssjtqxvsaqdgbvjalvracpidkyyely ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553080.7617044-778-223493726836205/AnsiballZ_stat.py'
Jan 27 22:31:21 compute-0 sudo[151459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:21 compute-0 python3.9[151461]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:31:21 compute-0 sudo[151459]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:21 compute-0 sudo[151582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qddgubvpestyhvsuhcrazxtuyywrwxxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553080.7617044-778-223493726836205/AnsiballZ_copy.py'
Jan 27 22:31:21 compute-0 sudo[151582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:21 compute-0 python3.9[151584]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769553080.7617044-778-223493726836205/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:21 compute-0 sudo[151582]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:22 compute-0 sudo[151734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smhwcqbhcyuxgnttvtsheysxtrthddoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553082.0395582-778-45587157189140/AnsiballZ_stat.py'
Jan 27 22:31:22 compute-0 sudo[151734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:22 compute-0 python3.9[151736]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:31:22 compute-0 sudo[151734]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:22 compute-0 sudo[151857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtwgugiqvgboigsrbvlxzavpwlscwcov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553082.0395582-778-45587157189140/AnsiballZ_copy.py'
Jan 27 22:31:22 compute-0 sudo[151857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:23 compute-0 python3.9[151859]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769553082.0395582-778-45587157189140/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:23 compute-0 sudo[151857]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:23 compute-0 sudo[152009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itlejsfobjqxuubswatfscrsmebheyoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553083.150164-778-225863475170038/AnsiballZ_stat.py'
Jan 27 22:31:23 compute-0 sudo[152009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:23 compute-0 python3.9[152011]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:31:23 compute-0 sudo[152009]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:23 compute-0 sudo[152132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptnwtsdwhyutwzjrkmnayrqzffeelues ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553083.150164-778-225863475170038/AnsiballZ_copy.py'
Jan 27 22:31:23 compute-0 sudo[152132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:24 compute-0 python3.9[152134]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769553083.150164-778-225863475170038/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:24 compute-0 sudo[152132]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:24 compute-0 sudo[152284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wioaxlqcjmvsjyevazsqkoielodcqbtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553084.247566-778-87918376773314/AnsiballZ_stat.py'
Jan 27 22:31:24 compute-0 sudo[152284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:24 compute-0 python3.9[152286]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:31:24 compute-0 sudo[152284]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:24 compute-0 sudo[152407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkpgzmsifxqvdomarasrarylprjgwzey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553084.247566-778-87918376773314/AnsiballZ_copy.py'
Jan 27 22:31:24 compute-0 sudo[152407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:25 compute-0 python3.9[152409]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769553084.247566-778-87918376773314/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:25 compute-0 sudo[152407]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:25 compute-0 sudo[152559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxybvscdbfsszzgchyfuulnqpmvqnzzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553085.301967-778-91073383257391/AnsiballZ_stat.py'
Jan 27 22:31:25 compute-0 sudo[152559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:25 compute-0 python3.9[152561]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:31:25 compute-0 sudo[152559]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:26 compute-0 sudo[152682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imyhklyioexzbcahmqsaumhnsghgendv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553085.301967-778-91073383257391/AnsiballZ_copy.py'
Jan 27 22:31:26 compute-0 sudo[152682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:26 compute-0 python3.9[152684]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769553085.301967-778-91073383257391/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:26 compute-0 sudo[152682]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:26 compute-0 sudo[152834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvitmfzhrshzmejeohhluwzxvlnrqtuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553086.501299-778-9218704424922/AnsiballZ_stat.py'
Jan 27 22:31:26 compute-0 sudo[152834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:27 compute-0 python3.9[152836]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:31:27 compute-0 sudo[152834]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:27 compute-0 sudo[152957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeggxbqtodlenojspkwlijgeymweagme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553086.501299-778-9218704424922/AnsiballZ_copy.py'
Jan 27 22:31:27 compute-0 sudo[152957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:27 compute-0 python3.9[152959]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769553086.501299-778-9218704424922/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:27 compute-0 sudo[152957]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:28 compute-0 sudo[153109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xerobcpnlphinlvvojyqkixfeovazlel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553087.7597754-778-83598660193264/AnsiballZ_stat.py'
Jan 27 22:31:28 compute-0 sudo[153109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:28 compute-0 python3.9[153111]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:31:28 compute-0 sudo[153109]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:28 compute-0 sudo[153232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfpuyfrljoezapnkhitrolagezitkhnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553087.7597754-778-83598660193264/AnsiballZ_copy.py'
Jan 27 22:31:28 compute-0 sudo[153232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:28 compute-0 python3.9[153234]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769553087.7597754-778-83598660193264/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:28 compute-0 sudo[153232]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:29 compute-0 sudo[153384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikoiwtepvlstrnghbwsddenrwuxqqwab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553088.9158194-778-222398885608362/AnsiballZ_stat.py'
Jan 27 22:31:29 compute-0 sudo[153384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:29 compute-0 python3.9[153386]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:31:29 compute-0 sudo[153384]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:29 compute-0 sudo[153507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuaoxiiohbixuohdjglnbbxllgpjswia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553088.9158194-778-222398885608362/AnsiballZ_copy.py'
Jan 27 22:31:29 compute-0 sudo[153507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:29 compute-0 python3.9[153509]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769553088.9158194-778-222398885608362/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:30 compute-0 sudo[153507]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:30 compute-0 sudo[153659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebcpmbmjgjrzlayrwxwvmrknvyrxyskx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553090.1216059-778-85648203614027/AnsiballZ_stat.py'
Jan 27 22:31:30 compute-0 sudo[153659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:30 compute-0 python3.9[153661]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:31:30 compute-0 sudo[153659]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:30 compute-0 sudo[153782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mchyickxwvlewirtoepaaaunhhpehbwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553090.1216059-778-85648203614027/AnsiballZ_copy.py'
Jan 27 22:31:30 compute-0 sudo[153782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:31 compute-0 python3.9[153784]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769553090.1216059-778-85648203614027/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:31 compute-0 sudo[153782]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:31 compute-0 sudo[153934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geslgndajittugooapefqyopvnufvkrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553091.2243881-778-23526518051597/AnsiballZ_stat.py'
Jan 27 22:31:31 compute-0 sudo[153934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:31 compute-0 python3.9[153936]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:31:31 compute-0 sudo[153934]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:31 compute-0 sudo[154057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzdfclhbbnbzaiopwxtymaswbhxuvatf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553091.2243881-778-23526518051597/AnsiballZ_copy.py'
Jan 27 22:31:31 compute-0 sudo[154057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:32 compute-0 python3.9[154059]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769553091.2243881-778-23526518051597/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:32 compute-0 sudo[154057]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:32 compute-0 sudo[154209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmzibcqpsaszytjyitzpxhwaqrucdzkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553092.2727094-778-12498055121835/AnsiballZ_stat.py'
Jan 27 22:31:32 compute-0 sudo[154209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:32 compute-0 python3.9[154211]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:31:32 compute-0 sudo[154209]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:33 compute-0 sudo[154332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrzxjdcufhbzgyycisgpocolknlnnfqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553092.2727094-778-12498055121835/AnsiballZ_copy.py'
Jan 27 22:31:33 compute-0 sudo[154332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:33 compute-0 python3.9[154334]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769553092.2727094-778-12498055121835/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:33 compute-0 sudo[154332]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:33 compute-0 python3.9[154484]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:31:34 compute-0 sudo[154637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzcwverwmdicdgqjwmykpexafsgmhzdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553094.200701-984-176987963511014/AnsiballZ_seboolean.py'
Jan 27 22:31:34 compute-0 sudo[154637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:34 compute-0 python3.9[154639]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 27 22:31:36 compute-0 sudo[154637]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:36 compute-0 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 27 22:31:36 compute-0 podman[154644]: 2026-01-27 22:31:36.101429477 +0000 UTC m=+0.051702186 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 22:31:36 compute-0 sudo[154814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkswpionanuxurdeqqmygzvtpdyqjlpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553096.1567755-992-32055417427765/AnsiballZ_copy.py'
Jan 27 22:31:36 compute-0 sudo[154814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:36 compute-0 python3.9[154816]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:36 compute-0 sudo[154814]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:37 compute-0 sudo[154966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebttxfbxzbcizsyvemopiuektftymekf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553096.7425456-992-191716421694864/AnsiballZ_copy.py'
Jan 27 22:31:37 compute-0 sudo[154966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:37 compute-0 python3.9[154968]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:37 compute-0 sudo[154966]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:37 compute-0 sudo[155118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyqnhlxuuqigiloheszmevsgfbpgyojm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553097.385786-992-2908238226434/AnsiballZ_copy.py'
Jan 27 22:31:37 compute-0 sudo[155118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:37 compute-0 python3.9[155120]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:37 compute-0 sudo[155118]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:38 compute-0 sudo[155270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdvpvhuyorgrqfmdqdcykdchbqhijptn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553098.1025677-992-5837613201538/AnsiballZ_copy.py'
Jan 27 22:31:38 compute-0 sudo[155270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:38 compute-0 python3.9[155272]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:38 compute-0 sudo[155270]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:39 compute-0 sudo[155422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmvgrjwhbfvehthejiaznwicumaqeour ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553098.902283-992-37833341901140/AnsiballZ_copy.py'
Jan 27 22:31:39 compute-0 sudo[155422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:39 compute-0 python3.9[155424]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:39 compute-0 sudo[155422]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:39 compute-0 sudo[155574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfucgclacntxgrsetwpbcxowpjpittir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553099.5688913-1028-55612407807978/AnsiballZ_copy.py'
Jan 27 22:31:39 compute-0 sudo[155574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:40 compute-0 python3.9[155576]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:40 compute-0 sudo[155574]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:40 compute-0 sudo[155726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyatylzgeewsmdepxswrbxgndobgtjsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553100.187328-1028-210451872980468/AnsiballZ_copy.py'
Jan 27 22:31:40 compute-0 sudo[155726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:40 compute-0 python3.9[155728]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:40 compute-0 sudo[155726]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:40 compute-0 sudo[155878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieybkeubgijkbauuakeaekgculmtfdzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553100.7689152-1028-111645521284177/AnsiballZ_copy.py'
Jan 27 22:31:41 compute-0 sudo[155878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:41 compute-0 python3.9[155880]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:41 compute-0 sudo[155878]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:41 compute-0 sudo[156030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyukockffyltiutwufichvrgpzkizspw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553101.3228972-1028-6227743795479/AnsiballZ_copy.py'
Jan 27 22:31:41 compute-0 sudo[156030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:41 compute-0 python3.9[156032]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:41 compute-0 sudo[156030]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:42 compute-0 sudo[156182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qckfdbpopbcjqpnwsyojzifuoclzjfpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553101.950982-1028-95703397760430/AnsiballZ_copy.py'
Jan 27 22:31:42 compute-0 sudo[156182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:42 compute-0 python3.9[156184]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:42 compute-0 sudo[156182]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:42 compute-0 sudo[156334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmohzlazvmtymsbcyqiernoaiawfjjjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553102.6408088-1064-263852693679407/AnsiballZ_systemd.py'
Jan 27 22:31:42 compute-0 sudo[156334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:43 compute-0 python3.9[156336]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 22:31:43 compute-0 systemd[1]: Reloading.
Jan 27 22:31:43 compute-0 systemd-sysv-generator[156385]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:31:43 compute-0 systemd-rc-local-generator[156380]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:31:43 compute-0 podman[156338]: 2026-01-27 22:31:43.341679002 +0000 UTC m=+0.100870872 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 22:31:43 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Jan 27 22:31:43 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Jan 27 22:31:43 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 27 22:31:43 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 27 22:31:43 compute-0 systemd[1]: Starting libvirt logging daemon...
Jan 27 22:31:43 compute-0 systemd[1]: Started libvirt logging daemon.
Jan 27 22:31:43 compute-0 sudo[156334]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:44 compute-0 sudo[156554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpeybrorelblaccutljyxxtcanstbhit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553103.8662813-1064-157440374087514/AnsiballZ_systemd.py'
Jan 27 22:31:44 compute-0 sudo[156554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:44 compute-0 python3.9[156556]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 22:31:44 compute-0 systemd[1]: Reloading.
Jan 27 22:31:44 compute-0 systemd-sysv-generator[156588]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:31:44 compute-0 systemd-rc-local-generator[156585]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:31:44 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 27 22:31:44 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 27 22:31:44 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 27 22:31:44 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 27 22:31:44 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 27 22:31:44 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 27 22:31:44 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 27 22:31:44 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 27 22:31:44 compute-0 sudo[156554]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:45 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 27 22:31:45 compute-0 sudo[156772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laekmggeyoplbbiakclyxvzncmskvkuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553105.0506377-1064-274508215377754/AnsiballZ_systemd.py'
Jan 27 22:31:45 compute-0 sudo[156772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:45 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 27 22:31:45 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 27 22:31:45 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 27 22:31:45 compute-0 python3.9[156774]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 22:31:45 compute-0 systemd[1]: Reloading.
Jan 27 22:31:45 compute-0 systemd-rc-local-generator[156804]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:31:45 compute-0 systemd-sysv-generator[156807]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:31:46 compute-0 setroubleshoot[156649]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 32d5cf4f-d235-4b64-adc8-a36aa768a04f
Jan 27 22:31:46 compute-0 setroubleshoot[156649]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Jan 27 22:31:46 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 22:31:46 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 27 22:31:46 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 27 22:31:46 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 27 22:31:46 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 27 22:31:46 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 27 22:31:47 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 27 22:31:47 compute-0 sudo[156772]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:47 compute-0 sudo[156994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgqezywjettthlkyntwwwwkzvrwgwhfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553107.2064722-1064-184040589374242/AnsiballZ_systemd.py'
Jan 27 22:31:47 compute-0 sudo[156994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:47 compute-0 python3.9[156996]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 22:31:47 compute-0 systemd[1]: Reloading.
Jan 27 22:31:47 compute-0 systemd-sysv-generator[157021]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:31:47 compute-0 systemd-rc-local-generator[157018]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:31:48 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Jan 27 22:31:48 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 27 22:31:48 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 27 22:31:48 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 27 22:31:48 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 27 22:31:48 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 27 22:31:48 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 27 22:31:48 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 27 22:31:48 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 27 22:31:48 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 27 22:31:48 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 27 22:31:48 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 27 22:31:48 compute-0 sudo[156994]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:48 compute-0 sudo[157209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciahsephjniozdpbmpqsccffoxjakbvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553108.4146852-1064-132754545356845/AnsiballZ_systemd.py'
Jan 27 22:31:48 compute-0 sudo[157209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:49 compute-0 python3.9[157211]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 22:31:49 compute-0 systemd[1]: Reloading.
Jan 27 22:31:49 compute-0 systemd-rc-local-generator[157239]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:31:49 compute-0 systemd-sysv-generator[157242]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:31:49 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Jan 27 22:31:49 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Jan 27 22:31:49 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 27 22:31:49 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 27 22:31:49 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 27 22:31:49 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 27 22:31:49 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 27 22:31:49 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 27 22:31:49 compute-0 sudo[157209]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:50 compute-0 sudo[157421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxfbpmaqtzktbpobvxqkcbuuwlumgchx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553109.8785365-1101-109722350140358/AnsiballZ_file.py'
Jan 27 22:31:50 compute-0 sudo[157421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:50 compute-0 python3.9[157423]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:50 compute-0 sudo[157421]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:50 compute-0 sudo[157573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztnpjevvfssnxzyzggusrednpdaietzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553110.52592-1109-91223672277149/AnsiballZ_find.py'
Jan 27 22:31:50 compute-0 sudo[157573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:50 compute-0 python3.9[157575]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 22:31:51 compute-0 sudo[157573]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:51 compute-0 sudo[157725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syecdkevoybhrteqqdngvkvrznleahso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553111.3230531-1123-126052102364774/AnsiballZ_stat.py'
Jan 27 22:31:51 compute-0 sudo[157725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:51 compute-0 python3.9[157727]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:31:51 compute-0 sudo[157725]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:52 compute-0 sudo[157848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nffqhphdvxjicombkcdowhywzdecjnsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553111.3230531-1123-126052102364774/AnsiballZ_copy.py'
Jan 27 22:31:52 compute-0 sudo[157848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:52 compute-0 python3.9[157850]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769553111.3230531-1123-126052102364774/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:52 compute-0 sudo[157848]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:52 compute-0 sudo[158000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bevgedbkvcffidlzktkmeeyhybsqskez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553112.6860516-1139-242116412883097/AnsiballZ_file.py'
Jan 27 22:31:52 compute-0 sudo[158000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:53 compute-0 python3.9[158002]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:53 compute-0 sudo[158000]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:53 compute-0 sudo[158152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvqdrfcwnvadurjsskhyqepxigtmkurp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553113.3367748-1147-22554484546816/AnsiballZ_stat.py'
Jan 27 22:31:53 compute-0 sudo[158152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:53 compute-0 python3.9[158154]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:31:53 compute-0 sudo[158152]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:54 compute-0 sudo[158230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hidxxbfpqzdwilvtqexxwlecxxsjlabf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553113.3367748-1147-22554484546816/AnsiballZ_file.py'
Jan 27 22:31:54 compute-0 sudo[158230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:54 compute-0 python3.9[158232]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:54 compute-0 sudo[158230]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:54 compute-0 sudo[158382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odaztmoeiustwkvnumaesdkreoiipiyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553114.416459-1159-183067766716721/AnsiballZ_stat.py'
Jan 27 22:31:54 compute-0 sudo[158382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:54 compute-0 python3.9[158384]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:31:54 compute-0 sudo[158382]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:55 compute-0 sudo[158460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hotcmtyoaxfspmaxusaxvoalllukqlgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553114.416459-1159-183067766716721/AnsiballZ_file.py'
Jan 27 22:31:55 compute-0 sudo[158460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:55 compute-0 python3.9[158462]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.gb68al5d recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:55 compute-0 sudo[158460]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:56 compute-0 sudo[158612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcshtjodrfhqptbknxowuwlyikvqyoxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553115.8094177-1171-185278561267662/AnsiballZ_stat.py'
Jan 27 22:31:56 compute-0 sudo[158612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:56 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 27 22:31:56 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 27 22:31:56 compute-0 python3.9[158614]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:31:56 compute-0 sudo[158612]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:56 compute-0 sudo[158690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xstndybvrvyhxsryalxwqtvtijcratai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553115.8094177-1171-185278561267662/AnsiballZ_file.py'
Jan 27 22:31:56 compute-0 sudo[158690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:57 compute-0 python3.9[158692]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:57 compute-0 sudo[158690]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:57 compute-0 sudo[158842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyswwjlbfzlhlwepbvlhgrjvbsueytvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553117.2827287-1184-238479315724235/AnsiballZ_command.py'
Jan 27 22:31:57 compute-0 sudo[158842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:57 compute-0 python3.9[158844]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
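The command above captures the live ruleset as JSON (`nft -j list ruleset`) before the play lays down its own files. The output follows the libnftables JSON shape: a top-level `"nftables"` array of metainfo/table/chain/rule objects. A minimal sketch of consuming such a dump — the sample document is hand-written and illustrative, and the `edpm`/`EDPM_INPUT` names are hypothetical stand-ins, not taken from this host:

```python
import json

# Hand-written sample in the shape emitted by `nft -j list ruleset`
# (libnftables JSON: a top-level "nftables" array). The real ruleset on the
# node is much larger; table/chain names here are illustrative only.
sample = '''
{"nftables": [
  {"metainfo": {"version": "1.0.4", "json_schema_version": 1}},
  {"table": {"family": "inet", "name": "edpm", "handle": 1}},
  {"chain": {"family": "inet", "table": "edpm", "name": "EDPM_INPUT", "handle": 2}}
]}
'''

doc = json.loads(sample)
# Each array element is a one-key object; pick out tables and chains.
tables = [o["table"]["name"] for o in doc["nftables"] if "table" in o]
chains = [o["chain"]["name"] for o in doc["nftables"] if "chain" in o]
print(tables, chains)
```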
Jan 27 22:31:57 compute-0 sudo[158842]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:58 compute-0 sudo[158995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnjustowtjefiqtkezgxfvigmpofzkyp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769553117.9482856-1192-211069806002476/AnsiballZ_edpm_nftables_from_files.py'
Jan 27 22:31:58 compute-0 sudo[158995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:58 compute-0 python3[158997]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 27 22:31:58 compute-0 sudo[158995]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:59 compute-0 sudo[159147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfnxnwtulannbknghxtuqyfghqbnokbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553118.7805274-1200-114473453662268/AnsiballZ_stat.py'
Jan 27 22:31:59 compute-0 sudo[159147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:59 compute-0 python3.9[159149]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:31:59 compute-0 sudo[159147]: pam_unix(sudo:session): session closed for user root
Jan 27 22:31:59 compute-0 sudo[159225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccpskmygxfdqnsddzlznjefhraisceod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553118.7805274-1200-114473453662268/AnsiballZ_file.py'
Jan 27 22:31:59 compute-0 sudo[159225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:31:59 compute-0 python3.9[159227]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:31:59 compute-0 sudo[159225]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:00 compute-0 sudo[159377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypyhvcqfzkfnswcovoupxcqkqdpldcgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553119.9060395-1212-178593778377826/AnsiballZ_stat.py'
Jan 27 22:32:00 compute-0 sudo[159377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:00 compute-0 python3.9[159379]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:32:00 compute-0 sudo[159377]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:00 compute-0 sudo[159502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upcxtzzouurvoxvfklbtveedyslgvnjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553119.9060395-1212-178593778377826/AnsiballZ_copy.py'
Jan 27 22:32:00 compute-0 sudo[159502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:00 compute-0 python3.9[159504]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769553119.9060395-1212-178593778377826/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:32:00 compute-0 sudo[159502]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:01 compute-0 sudo[159654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obbhgpedgdugyzhpzytdrjowijqcrdrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553121.127274-1227-19553946000909/AnsiballZ_stat.py'
Jan 27 22:32:01 compute-0 sudo[159654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:01 compute-0 python3.9[159656]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:32:01 compute-0 sudo[159654]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:01 compute-0 sudo[159732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhkslskwgcgjjdwsjkxoimrnpttoojtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553121.127274-1227-19553946000909/AnsiballZ_file.py'
Jan 27 22:32:01 compute-0 sudo[159732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:02 compute-0 python3.9[159734]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:32:02 compute-0 sudo[159732]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:02 compute-0 sudo[159884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzibqfctohoabowqgrmxuvbczhehoglj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553122.3356318-1239-5539572964490/AnsiballZ_stat.py'
Jan 27 22:32:02 compute-0 sudo[159884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:02 compute-0 python3.9[159886]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:32:02 compute-0 sudo[159884]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:03 compute-0 sudo[159962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xojfgcfkgpvrqnnkgpnlrvlrulfddaqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553122.3356318-1239-5539572964490/AnsiballZ_file.py'
Jan 27 22:32:03 compute-0 sudo[159962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:03 compute-0 python3.9[159965]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:32:03 compute-0 sudo[159962]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:03 compute-0 sudo[160115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmsnvbudaalrdymgtllurjsqhmranyht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553123.4316335-1251-101776786107466/AnsiballZ_stat.py'
Jan 27 22:32:03 compute-0 sudo[160115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:04 compute-0 python3.9[160117]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:32:04 compute-0 sudo[160115]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:32:04.117 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:32:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:32:04.119 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:32:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:32:04.119 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:32:04 compute-0 sudo[160240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmkdaugqgkzlcfnkzknnphnpnxxhyalb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553123.4316335-1251-101776786107466/AnsiballZ_copy.py'
Jan 27 22:32:04 compute-0 sudo[160240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:04 compute-0 python3.9[160242]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769553123.4316335-1251-101776786107466/.source.nft follow=False _original_basename=ruleset.j2 checksum=8a12d4eb5149b6e500230381c1359a710881e9b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:32:04 compute-0 sudo[160240]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:05 compute-0 sudo[160392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byugnzuswnzikuoxeeavafrylnzfisyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553124.743302-1266-44137692597042/AnsiballZ_file.py'
Jan 27 22:32:05 compute-0 sudo[160392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:05 compute-0 python3.9[160394]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:32:05 compute-0 sudo[160392]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:05 compute-0 sudo[160544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stxfpdzmetxbwclxntablxgryodajvdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553125.3916154-1274-187074070520522/AnsiballZ_command.py'
Jan 27 22:32:05 compute-0 sudo[160544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:05 compute-0 python3.9[160546]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:32:05 compute-0 sudo[160544]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:06 compute-0 podman[160649]: 2026-01-27 22:32:06.365092152 +0000 UTC m=+0.055189550 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 22:32:06 compute-0 sudo[160718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xflgzzlcwrzshmaqyjtcmqhcsotpkttl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553126.017586-1282-46694268753441/AnsiballZ_blockinfile.py'
Jan 27 22:32:06 compute-0 sudo[160718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:06 compute-0 python3.9[160720]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
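Based on the blockinfile parameters logged above (marker `# {mark} ANSIBLE MANAGED BLOCK` with `marker_begin=BEGIN` and `marker_end=END`, validated via `nft -c -f %s` before write), the fragment maintained in /etc/sysconfig/nftables.conf would look like the following; this is a reconstruction from the logged arguments, not a capture of the file itself:

```
# BEGIN ANSIBLE MANAGED BLOCK
include "/etc/nftables/iptables.nft"
include "/etc/nftables/edpm-chains.nft"
include "/etc/nftables/edpm-rules.nft"
include "/etc/nftables/edpm-jumps.nft"
# END ANSIBLE MANAGED BLOCK
```

Because the block sits in nftables.conf, these includes are replayed by the nftables systemd service at boot, while the commands that follow in the log apply the same files to the live ruleset immediately.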
Jan 27 22:32:06 compute-0 sudo[160718]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:07 compute-0 sudo[160870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csjqdnogkxwrnjiuuodzmhnoaqlupfyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553126.8424413-1291-32575753427539/AnsiballZ_command.py'
Jan 27 22:32:07 compute-0 sudo[160870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:07 compute-0 python3.9[160872]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:32:07 compute-0 sudo[160870]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:07 compute-0 sudo[161023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbkltfiehpqagbyulaxqnokpqtvdneol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553127.4498756-1299-115974947191122/AnsiballZ_stat.py'
Jan 27 22:32:07 compute-0 sudo[161023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:07 compute-0 python3.9[161025]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:32:07 compute-0 sudo[161023]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:08 compute-0 sudo[161177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnnyrqirsxjeddhaltypmnfqsloqkfrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553128.0622323-1307-71593235451755/AnsiballZ_command.py'
Jan 27 22:32:08 compute-0 sudo[161177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:08 compute-0 python3.9[161179]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
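The command invocations in this stretch of the log show a deliberate load order: the whole file set is first dry-run validated with `nft -c -f -` (the pipeline at 22:32:05), then applied live in two steps, chains alone via `nft -f /etc/nftables/edpm-chains.nft` and then flushes, rules, and update-jumps piped together into `nft -f -`. A small sketch documenting that sequence — it runs no nft itself, and the per-file role comments are inferred from the file names, not stated in the log:

```python
# Order in which the /etc/nftables files are loaded, as seen in the log.
CHECK_ORDER = [
    "edpm-chains.nft",        # declare tables/chains (inferred from name)
    "edpm-flushes.nft",       # flush previously installed rules
    "edpm-rules.nft",         # install the current rule set
    "edpm-update-jumps.nft",  # refresh jumps into the base chains
    "edpm-jumps.nft",         # initial jump wiring (used at boot)
]
# Live apply happens in two invocations: chains first, so the flush file
# always has existing chains to operate on, then flush + rules + update-jumps.
APPLY_FIRST = ["edpm-chains.nft"]
APPLY_SECOND = ["edpm-flushes.nft", "edpm-rules.nft", "edpm-update-jumps.nft"]

applied = APPLY_FIRST + APPLY_SECOND
# Everything applied live was part of the validated set, in the same order.
assert applied == [f for f in CHECK_ORDER if f in applied]
print(applied)
```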
Jan 27 22:32:08 compute-0 sudo[161177]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:08 compute-0 sudo[161332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sakjvfczeiisaxiuiqmlcnhkmqwcdupe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553128.6691983-1315-106717641677245/AnsiballZ_file.py'
Jan 27 22:32:08 compute-0 sudo[161332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:09 compute-0 python3.9[161334]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
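The three tasks around edpm-rules.nft.changed form a change-marker handshake: the play touches the marker right after copying a new edpm-rules.nft (22:32:05), stats it (22:32:07), reloads the ruleset only because it exists (22:32:08), and finally removes it (22:32:09) so an unchanged rerun skips the reload. A minimal sketch of that pattern under stated assumptions: the paths are temp stand-ins and the reload itself is elided where the real play pipes files into `nft -f -`:

```python
import pathlib
import tempfile

def apply_if_changed(marker: pathlib.Path) -> bool:
    """Reload rules only when the marker exists; always clean it up after."""
    if not marker.exists():   # the ansible.builtin.stat step
        return False
    # ... real play: cat flushes/rules/update-jumps | nft -f -  (elided here)
    marker.unlink()           # the state=absent step
    return True

with tempfile.TemporaryDirectory() as d:
    marker = pathlib.Path(d) / "edpm-rules.nft.changed"
    marker.touch()                      # state=touch after writing new rules
    first = apply_if_changed(marker)    # marker present: reload, then remove
    second = apply_if_changed(marker)   # marker gone: idempotent no-op
    print(first, second)
```

Keeping the marker as a file on the managed host (rather than an in-memory Ansible fact) means the "pending reload" survives a play that fails between the copy and the apply step.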
Jan 27 22:32:09 compute-0 sudo[161332]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:09 compute-0 sudo[161484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akbrbwwjpbjvcmqwghckvadejcbzhkor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553129.273415-1323-52171045174094/AnsiballZ_stat.py'
Jan 27 22:32:09 compute-0 sudo[161484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:09 compute-0 python3.9[161486]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:32:09 compute-0 sudo[161484]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:10 compute-0 sudo[161607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjaecvyfvimpbauxbygokfjgzwzxcagi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553129.273415-1323-52171045174094/AnsiballZ_copy.py'
Jan 27 22:32:10 compute-0 sudo[161607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:10 compute-0 python3.9[161609]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769553129.273415-1323-52171045174094/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:32:10 compute-0 sudo[161607]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:10 compute-0 sudo[161759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqglenxqxsxyktoexvdmkketrjzydwxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553130.4734647-1338-81312685059802/AnsiballZ_stat.py'
Jan 27 22:32:10 compute-0 sudo[161759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:10 compute-0 python3.9[161761]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:32:10 compute-0 sudo[161759]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:11 compute-0 sudo[161882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajtsmumownvekldpjjmvjzkhvuresiea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553130.4734647-1338-81312685059802/AnsiballZ_copy.py'
Jan 27 22:32:11 compute-0 sudo[161882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:11 compute-0 python3.9[161884]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769553130.4734647-1338-81312685059802/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:32:11 compute-0 sudo[161882]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:11 compute-0 sudo[162034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mytgbjhlxoytuumzsxqhxjrzckhyidia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553131.6665308-1353-90240186774318/AnsiballZ_stat.py'
Jan 27 22:32:11 compute-0 sudo[162034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:12 compute-0 python3.9[162036]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:32:12 compute-0 sudo[162034]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:12 compute-0 sudo[162157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsorflbzgogpbrruzmombryqkpxjpqwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553131.6665308-1353-90240186774318/AnsiballZ_copy.py'
Jan 27 22:32:12 compute-0 sudo[162157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:12 compute-0 python3.9[162159]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769553131.6665308-1353-90240186774318/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:32:12 compute-0 sudo[162157]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:13 compute-0 sudo[162309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbimmdgngzncnhppwsnmbjdenptigble ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553132.9219348-1368-234926467528801/AnsiballZ_systemd.py'
Jan 27 22:32:13 compute-0 sudo[162309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:13 compute-0 python3.9[162311]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:32:13 compute-0 systemd[1]: Reloading.
Jan 27 22:32:13 compute-0 systemd-sysv-generator[162373]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:32:13 compute-0 systemd-rc-local-generator[162370]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:32:13 compute-0 podman[162313]: 2026-01-27 22:32:13.663183739 +0000 UTC m=+0.126049431 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 22:32:13 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Jan 27 22:32:13 compute-0 sudo[162309]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:14 compute-0 sudo[162528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdkqlptqxjarlktsqcyjeigpjygjawnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553134.0691109-1376-253376370424386/AnsiballZ_systemd.py'
Jan 27 22:32:14 compute-0 sudo[162528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:14 compute-0 python3.9[162530]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 27 22:32:14 compute-0 systemd[1]: Reloading.
Jan 27 22:32:14 compute-0 systemd-sysv-generator[162557]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:32:14 compute-0 systemd-rc-local-generator[162553]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:32:15 compute-0 systemd[1]: Reloading.
Jan 27 22:32:15 compute-0 systemd-rc-local-generator[162593]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:32:15 compute-0 systemd-sysv-generator[162598]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:32:15 compute-0 sudo[162528]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:15 compute-0 sshd-session[107876]: Connection closed by 192.168.122.30 port 33608
Jan 27 22:32:15 compute-0 sshd-session[107873]: pam_unix(sshd:session): session closed for user zuul
Jan 27 22:32:15 compute-0 systemd[1]: session-22.scope: Deactivated successfully.
Jan 27 22:32:15 compute-0 systemd[1]: session-22.scope: Consumed 3min 21.034s CPU time.
Jan 27 22:32:15 compute-0 systemd-logind[789]: Session 22 logged out. Waiting for processes to exit.
Jan 27 22:32:15 compute-0 systemd-logind[789]: Removed session 22.
Jan 27 22:32:21 compute-0 sshd-session[162626]: Accepted publickey for zuul from 192.168.122.30 port 54354 ssh2: ECDSA SHA256:f2siSFgqhRl+V43NMPJ82N3mZUylXFtu0KAbYfQTK7A
Jan 27 22:32:21 compute-0 systemd-logind[789]: New session 23 of user zuul.
Jan 27 22:32:21 compute-0 systemd[1]: Started Session 23 of User zuul.
Jan 27 22:32:21 compute-0 sshd-session[162626]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 22:32:22 compute-0 sshd-session[162682]: Received disconnect from 45.148.10.151 port 44298:11:  [preauth]
Jan 27 22:32:22 compute-0 sshd-session[162682]: Disconnected from authenticating user root 45.148.10.151 port 44298 [preauth]
Jan 27 22:32:22 compute-0 python3.9[162781]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:32:23 compute-0 python3.9[162935]: ansible-ansible.builtin.service_facts Invoked
Jan 27 22:32:23 compute-0 network[162952]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 22:32:23 compute-0 network[162953]: 'network-scripts' will be removed from distribution in near future.
Jan 27 22:32:23 compute-0 network[162954]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 22:32:29 compute-0 sudo[163223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvrwxatckjtmjoqwfcovgigpsweqowpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553149.0570424-42-249079877003200/AnsiballZ_setup.py'
Jan 27 22:32:29 compute-0 sudo[163223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:29 compute-0 python3.9[163225]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 22:32:30 compute-0 sudo[163223]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:30 compute-0 sudo[163307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toycpteyyasohvphtgwhcxjgxpmunzge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553149.0570424-42-249079877003200/AnsiballZ_dnf.py'
Jan 27 22:32:30 compute-0 sudo[163307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:30 compute-0 python3.9[163309]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 22:32:34 compute-0 sshd-session[163311]: Invalid user user from 45.148.10.121 port 34128
Jan 27 22:32:34 compute-0 sshd-session[163311]: Connection closed by invalid user user 45.148.10.121 port 34128 [preauth]
Jan 27 22:32:35 compute-0 sudo[163307]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:36 compute-0 sudo[163462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubcxetkyqirczgaqbuzskreqbasmervl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553155.9933305-54-122485077142906/AnsiballZ_stat.py'
Jan 27 22:32:36 compute-0 sudo[163462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:36 compute-0 podman[163464]: 2026-01-27 22:32:36.531166412 +0000 UTC m=+0.077178670 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 22:32:36 compute-0 python3.9[163465]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:32:36 compute-0 sudo[163462]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:37 compute-0 sudo[163635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pakkxayeamshtyanjynvshnseyqoksbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553156.912198-64-7899668509303/AnsiballZ_command.py'
Jan 27 22:32:37 compute-0 sudo[163635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:37 compute-0 python3.9[163637]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:32:37 compute-0 sudo[163635]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:37 compute-0 sudo[163788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rijihhmcczvogrnwxzzewyxrarhnvqyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553157.7334316-74-197264531736430/AnsiballZ_stat.py'
Jan 27 22:32:37 compute-0 sudo[163788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:38 compute-0 python3.9[163790]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:32:38 compute-0 sudo[163788]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:38 compute-0 sudo[163940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gchumuekjqlxrpkomdvjgnpjgolvomvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553158.3090837-82-214023264882952/AnsiballZ_command.py'
Jan 27 22:32:38 compute-0 sudo[163940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:38 compute-0 python3.9[163942]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:32:38 compute-0 sudo[163940]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:39 compute-0 sudo[164093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lluhsyfataxcbzvbpgpockhqdylxktgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553158.9894474-90-171332750288452/AnsiballZ_stat.py'
Jan 27 22:32:39 compute-0 sudo[164093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:39 compute-0 python3.9[164095]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:32:39 compute-0 sudo[164093]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:40 compute-0 sudo[164216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gundyqeuqhozthxenbugdjvcuqinwpzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553158.9894474-90-171332750288452/AnsiballZ_copy.py'
Jan 27 22:32:40 compute-0 sudo[164216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:40 compute-0 python3.9[164218]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769553158.9894474-90-171332750288452/.source.iscsi _original_basename=.tax3v5em follow=False checksum=8d43fa7374798ecf354233b4f70743507bf57913 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:32:40 compute-0 sudo[164216]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:41 compute-0 sudo[164368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bytswethgefmgjznffuejhnnvavrcezt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553160.4092314-105-18596692366420/AnsiballZ_file.py'
Jan 27 22:32:41 compute-0 sudo[164368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:41 compute-0 python3.9[164370]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:32:41 compute-0 sudo[164368]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:41 compute-0 sudo[164520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpsucbozdqjeffcotbppgbvzptozmckk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553161.4005387-113-276300048352969/AnsiballZ_lineinfile.py'
Jan 27 22:32:41 compute-0 sudo[164520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:41 compute-0 python3.9[164522]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:32:42 compute-0 sudo[164520]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:42 compute-0 sudo[164672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sncxsbhzitdxiqkwcmcmrgwdwynhsqse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553162.1949134-122-200713070876541/AnsiballZ_systemd_service.py'
Jan 27 22:32:42 compute-0 sudo[164672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:43 compute-0 python3.9[164674]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:32:43 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 27 22:32:43 compute-0 sudo[164672]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:43 compute-0 sudo[164828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnksesitvpdulfulchujymlacjljnsre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553163.248972-130-120204099862758/AnsiballZ_systemd_service.py'
Jan 27 22:32:43 compute-0 sudo[164828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:43 compute-0 python3.9[164830]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:32:43 compute-0 systemd[1]: Reloading.
Jan 27 22:32:43 compute-0 systemd-rc-local-generator[164878]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:32:43 compute-0 systemd-sysv-generator[164883]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:32:43 compute-0 podman[164834]: 2026-01-27 22:32:43.980052571 +0000 UTC m=+0.118965586 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 22:32:44 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 27 22:32:44 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 27 22:32:44 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Jan 27 22:32:44 compute-0 systemd[1]: Started Open-iSCSI.
Jan 27 22:32:44 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 27 22:32:44 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 27 22:32:44 compute-0 sudo[164828]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:45 compute-0 python3.9[165053]: ansible-ansible.builtin.service_facts Invoked
Jan 27 22:32:45 compute-0 network[165070]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 22:32:45 compute-0 network[165071]: 'network-scripts' will be removed from distribution in near future.
Jan 27 22:32:45 compute-0 network[165072]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 22:32:49 compute-0 sudo[165341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuvcfychvcucdsknhgliejakxcunmcvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553169.509535-153-260944753499318/AnsiballZ_dnf.py'
Jan 27 22:32:49 compute-0 sudo[165341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:49 compute-0 python3.9[165343]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 22:32:52 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 22:32:52 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 27 22:32:52 compute-0 systemd[1]: Reloading.
Jan 27 22:32:52 compute-0 systemd-rc-local-generator[165382]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:32:52 compute-0 systemd-sysv-generator[165385]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:32:52 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 22:32:52 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 22:32:52 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 22:32:52 compute-0 systemd[1]: run-r9fad5ec7d0ee474e83fc7485769a3f3d.service: Deactivated successfully.
Jan 27 22:32:53 compute-0 sudo[165341]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:53 compute-0 sudo[165657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztjrxsguesrtqtngfocwohzdhyqqmwod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553173.4228945-162-74705863754894/AnsiballZ_file.py'
Jan 27 22:32:53 compute-0 sudo[165657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:53 compute-0 python3.9[165659]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 27 22:32:53 compute-0 sudo[165657]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:54 compute-0 sudo[165809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usfvypbvnatnvslsgpckpavpcysfyvkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553174.0085783-170-63345819196391/AnsiballZ_modprobe.py'
Jan 27 22:32:54 compute-0 sudo[165809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:54 compute-0 python3.9[165811]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 27 22:32:54 compute-0 sudo[165809]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:55 compute-0 sudo[165965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzxpzybzewidbgrhgahmriwkxidaxmxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553175.1254692-178-129905133924853/AnsiballZ_stat.py'
Jan 27 22:32:55 compute-0 sudo[165965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:55 compute-0 python3.9[165967]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:32:55 compute-0 sudo[165965]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:56 compute-0 sudo[166088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-keyivlohbnspletowcysxqpoqlvfgngc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553175.1254692-178-129905133924853/AnsiballZ_copy.py'
Jan 27 22:32:56 compute-0 sudo[166088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:56 compute-0 python3.9[166090]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769553175.1254692-178-129905133924853/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:32:56 compute-0 sudo[166088]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:56 compute-0 sudo[166240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chqlyufcyonkjjxnxthgwvcfxjdnegkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553176.6526053-194-17940795040236/AnsiballZ_lineinfile.py'
Jan 27 22:32:56 compute-0 sudo[166240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:57 compute-0 python3.9[166242]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:32:57 compute-0 sudo[166240]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:58 compute-0 sudo[166392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzogfefizdiznjzmukenlkyykeqmbfhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553177.2556074-202-183810287611725/AnsiballZ_systemd.py'
Jan 27 22:32:58 compute-0 sudo[166392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:58 compute-0 python3.9[166394]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 22:32:58 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 27 22:32:58 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 27 22:32:58 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 27 22:32:58 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 27 22:32:58 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 27 22:32:58 compute-0 sudo[166392]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:58 compute-0 sudo[166548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkifqxyfmisnkofhzzerhraextllosmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553178.5264397-210-178146964344701/AnsiballZ_command.py'
Jan 27 22:32:58 compute-0 sudo[166548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:59 compute-0 python3.9[166550]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:32:59 compute-0 sudo[166548]: pam_unix(sudo:session): session closed for user root
Jan 27 22:32:59 compute-0 sudo[166701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjebwhyuxjecabcdadjvasyqdadqwjjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553179.364749-220-199022567632176/AnsiballZ_stat.py'
Jan 27 22:32:59 compute-0 sudo[166701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:32:59 compute-0 python3.9[166703]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:32:59 compute-0 sudo[166701]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:00 compute-0 sudo[166853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndkfmeuwmdvvekujlkkpmsimggwxrvrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553179.9789999-229-49369241961114/AnsiballZ_stat.py'
Jan 27 22:33:00 compute-0 sudo[166853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:00 compute-0 python3.9[166855]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:33:00 compute-0 sudo[166853]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:00 compute-0 sudo[166976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzekwudthvdqlclwjulpvkiocpucbkee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553179.9789999-229-49369241961114/AnsiballZ_copy.py'
Jan 27 22:33:00 compute-0 sudo[166976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:00 compute-0 python3.9[166978]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769553179.9789999-229-49369241961114/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:33:00 compute-0 sudo[166976]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:01 compute-0 sudo[167128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eijvpyxfunqamoxckgddogujkwmvlabb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553181.073401-244-262719026837995/AnsiballZ_command.py'
Jan 27 22:33:01 compute-0 sudo[167128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:01 compute-0 python3.9[167130]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:33:01 compute-0 sudo[167128]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:01 compute-0 sudo[167281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duirlwcpkzfakumdhdehmkcbuctwvafe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553181.6841905-252-274773500565019/AnsiballZ_lineinfile.py'
Jan 27 22:33:01 compute-0 sudo[167281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:02 compute-0 python3.9[167283]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:33:02 compute-0 sudo[167281]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:02 compute-0 sudo[167433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxybyguepjqoruzndwogzbqqffvvfwxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553182.3594565-260-43561522351990/AnsiballZ_replace.py'
Jan 27 22:33:02 compute-0 sudo[167433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:02 compute-0 python3.9[167435]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:33:02 compute-0 sudo[167433]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:03 compute-0 sudo[167585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwepxcrphuuipvyvauqbolgtvyvdonwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553183.1374652-268-18324591628718/AnsiballZ_replace.py'
Jan 27 22:33:03 compute-0 sudo[167585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:03 compute-0 python3.9[167587]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:33:03 compute-0 sudo[167585]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:04 compute-0 sudo[167737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lafyvqyydgpixlfjwjlmnrssxcefbeui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553183.778714-277-187936671958067/AnsiballZ_lineinfile.py'
Jan 27 22:33:04 compute-0 sudo[167737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:33:04.118 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:33:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:33:04.120 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:33:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:33:04.120 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:33:04 compute-0 python3.9[167739]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:33:04 compute-0 sudo[167737]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:04 compute-0 sudo[167889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfzxegqulmykxhmqlckryujlvgorcpqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553184.4025364-277-64779011153954/AnsiballZ_lineinfile.py'
Jan 27 22:33:04 compute-0 sudo[167889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:04 compute-0 python3.9[167891]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:33:04 compute-0 sudo[167889]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:05 compute-0 sudo[168041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxqlooqqqxljteqdiiejvdjskuuwvtdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553185.044607-277-193136376728554/AnsiballZ_lineinfile.py'
Jan 27 22:33:05 compute-0 sudo[168041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:05 compute-0 python3.9[168043]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:33:05 compute-0 sudo[168041]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:05 compute-0 sudo[168193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imoogpacjzovewwlgqnqsdmipvdmouwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553185.66331-277-261189424225536/AnsiballZ_lineinfile.py'
Jan 27 22:33:05 compute-0 sudo[168193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:06 compute-0 python3.9[168195]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:33:06 compute-0 sudo[168193]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:06 compute-0 sudo[168355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miqimhfsypffpcgpfxiucgltrntybinb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553186.390662-306-47567111795626/AnsiballZ_stat.py'
Jan 27 22:33:06 compute-0 sudo[168355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:06 compute-0 podman[168319]: 2026-01-27 22:33:06.833861477 +0000 UTC m=+0.106197148 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 27 22:33:07 compute-0 python3.9[168363]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:33:07 compute-0 sudo[168355]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:07 compute-0 sudo[168515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vznfcmlikmjgzdpdzowbyrjsgglifbqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553187.185255-314-150344642493602/AnsiballZ_command.py'
Jan 27 22:33:07 compute-0 sudo[168515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:07 compute-0 python3.9[168517]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:33:07 compute-0 sudo[168515]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:08 compute-0 sudo[168668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewqqcttxwbfwplzzqraeduarzvbbemve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553187.829304-323-31484088274642/AnsiballZ_systemd_service.py'
Jan 27 22:33:08 compute-0 sudo[168668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:08 compute-0 python3.9[168670]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:33:08 compute-0 systemd[1]: Listening on multipathd control socket.
Jan 27 22:33:08 compute-0 sudo[168668]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:08 compute-0 sudo[168824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-speyhskvabjgxemxabonlphtizrfecyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553188.5978038-331-212006565265940/AnsiballZ_systemd_service.py'
Jan 27 22:33:08 compute-0 sudo[168824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:09 compute-0 python3.9[168826]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:33:09 compute-0 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 27 22:33:09 compute-0 udevadm[168831]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 27 22:33:09 compute-0 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 27 22:33:09 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 27 22:33:09 compute-0 multipathd[168834]: --------start up--------
Jan 27 22:33:09 compute-0 multipathd[168834]: read /etc/multipath.conf
Jan 27 22:33:09 compute-0 multipathd[168834]: path checkers start up
Jan 27 22:33:09 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 27 22:33:09 compute-0 sudo[168824]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:09 compute-0 sudo[168991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffridrzoprazpqbcfqggocimrtcyxpgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553189.648032-343-184401815187833/AnsiballZ_file.py'
Jan 27 22:33:09 compute-0 sudo[168991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:10 compute-0 python3.9[168993]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 27 22:33:10 compute-0 sudo[168991]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:10 compute-0 sudo[169143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oobznplijumuqbvjgsrpemieqkbmeufm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553190.2858393-351-255232359249787/AnsiballZ_modprobe.py'
Jan 27 22:33:10 compute-0 sudo[169143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:10 compute-0 python3.9[169145]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 27 22:33:10 compute-0 kernel: Key type psk registered
Jan 27 22:33:10 compute-0 sudo[169143]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:11 compute-0 sudo[169304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rztzjktnduntfylwqwvgpzgicezelpqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553191.110412-359-127552449095033/AnsiballZ_stat.py'
Jan 27 22:33:11 compute-0 sudo[169304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:11 compute-0 python3.9[169306]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:33:11 compute-0 sudo[169304]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:12 compute-0 sudo[169427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eblvqpfupudtdnsouystwdkncfbqahdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553191.110412-359-127552449095033/AnsiballZ_copy.py'
Jan 27 22:33:12 compute-0 sudo[169427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:12 compute-0 python3.9[169429]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769553191.110412-359-127552449095033/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:33:12 compute-0 sudo[169427]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:12 compute-0 sudo[169579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgargjcalyozpkozffzdswoxdlwiakbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553192.4650393-375-3894311938583/AnsiballZ_lineinfile.py'
Jan 27 22:33:12 compute-0 sudo[169579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:12 compute-0 python3.9[169581]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:33:12 compute-0 sudo[169579]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:13 compute-0 sudo[169731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itfoafzvjqorurryujsvhopsbwkrmizd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553193.037287-383-4098720592885/AnsiballZ_systemd.py'
Jan 27 22:33:13 compute-0 sudo[169731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:13 compute-0 python3.9[169733]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 22:33:13 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 27 22:33:13 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 27 22:33:13 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 27 22:33:13 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 27 22:33:13 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 27 22:33:13 compute-0 sudo[169731]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:14 compute-0 sudo[169901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emeeebwewwiwppdlntioksgnmcgbnkek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553193.9311104-391-156205435636722/AnsiballZ_dnf.py'
Jan 27 22:33:14 compute-0 sudo[169901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:14 compute-0 podman[169861]: 2026-01-27 22:33:14.28624837 +0000 UTC m=+0.091322051 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 22:33:14 compute-0 python3.9[169908]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 22:33:16 compute-0 systemd[1]: Reloading.
Jan 27 22:33:16 compute-0 systemd-rc-local-generator[169944]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:33:16 compute-0 systemd-sysv-generator[169948]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:33:16 compute-0 systemd[1]: Reloading.
Jan 27 22:33:16 compute-0 systemd-sysv-generator[169980]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:33:16 compute-0 systemd-rc-local-generator[169976]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:33:17 compute-0 systemd-logind[789]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 27 22:33:17 compute-0 systemd-logind[789]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 27 22:33:17 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 27 22:33:17 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 27 22:33:17 compute-0 systemd[1]: Reloading.
Jan 27 22:33:17 compute-0 systemd-sysv-generator[170082]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:33:17 compute-0 systemd-rc-local-generator[170077]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:33:17 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 27 22:33:18 compute-0 sudo[169901]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:18 compute-0 sudo[171217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czhkoeuywaukivlavlnbdbbehkbjfibq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553198.3864434-399-165881359220680/AnsiballZ_systemd_service.py'
Jan 27 22:33:18 compute-0 sudo[171217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:18 compute-0 python3.9[171238]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 22:33:19 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 27 22:33:19 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 27 22:33:19 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.821s CPU time.
Jan 27 22:33:19 compute-0 systemd[1]: run-r67a3af751a98401bac0edca995e64a8f.service: Deactivated successfully.
Jan 27 22:33:19 compute-0 systemd[1]: Stopping Open-iSCSI...
Jan 27 22:33:19 compute-0 iscsid[164895]: iscsid shutting down.
Jan 27 22:33:19 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Jan 27 22:33:19 compute-0 systemd[1]: Stopped Open-iSCSI.
Jan 27 22:33:19 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 27 22:33:19 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 27 22:33:19 compute-0 systemd[1]: Started Open-iSCSI.
Jan 27 22:33:19 compute-0 sudo[171217]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:19 compute-0 sudo[171534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qevxvppidtefrmbkfawdrgoxjohuyyqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553199.2144654-407-133899273843071/AnsiballZ_systemd_service.py'
Jan 27 22:33:19 compute-0 sudo[171534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:19 compute-0 python3.9[171536]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 22:33:19 compute-0 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 27 22:33:19 compute-0 multipathd[168834]: exit (signal)
Jan 27 22:33:19 compute-0 multipathd[168834]: --------shut down-------
Jan 27 22:33:19 compute-0 systemd[1]: multipathd.service: Deactivated successfully.
Jan 27 22:33:19 compute-0 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 27 22:33:19 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 27 22:33:19 compute-0 multipathd[171543]: --------start up--------
Jan 27 22:33:19 compute-0 multipathd[171543]: read /etc/multipath.conf
Jan 27 22:33:19 compute-0 multipathd[171543]: path checkers start up
Jan 27 22:33:19 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 27 22:33:19 compute-0 sudo[171534]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:20 compute-0 python3.9[171700]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:33:21 compute-0 sudo[171854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnudnddizfmerszflopjonuapipjuunp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553201.1936162-425-63648211796122/AnsiballZ_file.py'
Jan 27 22:33:21 compute-0 sudo[171854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:21 compute-0 python3.9[171856]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:33:21 compute-0 sudo[171854]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:22 compute-0 sudo[172006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gunedlcsfiyetexnpczpyouwullbmtoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553202.011249-436-16474707099307/AnsiballZ_systemd_service.py'
Jan 27 22:33:22 compute-0 sudo[172006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:22 compute-0 python3.9[172008]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 22:33:22 compute-0 systemd[1]: Reloading.
Jan 27 22:33:22 compute-0 systemd-rc-local-generator[172033]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:33:22 compute-0 systemd-sysv-generator[172036]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:33:23 compute-0 sudo[172006]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:23 compute-0 python3.9[172193]: ansible-ansible.builtin.service_facts Invoked
Jan 27 22:33:24 compute-0 network[172210]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 22:33:24 compute-0 network[172211]: 'network-scripts' will be removed from distribution in near future.
Jan 27 22:33:24 compute-0 network[172212]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 22:33:29 compute-0 sudo[172482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rigotbvnqwcdsfgjnorfrtmembwwbjnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553209.5361984-455-194171367316793/AnsiballZ_systemd_service.py'
Jan 27 22:33:29 compute-0 sudo[172482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:30 compute-0 python3.9[172484]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:33:30 compute-0 sudo[172482]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:30 compute-0 sudo[172635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kckncbbgobwdzrozncfahpfzsrqndlui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553210.3139896-455-109201953086202/AnsiballZ_systemd_service.py'
Jan 27 22:33:30 compute-0 sudo[172635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:30 compute-0 python3.9[172637]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:33:30 compute-0 sudo[172635]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:31 compute-0 sudo[172788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrvmjwnihrbhxbiqxvelzyzpfschyrnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553211.0411663-455-118496803080056/AnsiballZ_systemd_service.py'
Jan 27 22:33:31 compute-0 sudo[172788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:31 compute-0 python3.9[172790]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:33:31 compute-0 sudo[172788]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:32 compute-0 sudo[172941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgyawsrzqiugowuztzqutpngvhhbmohd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553211.7866852-455-23394750641558/AnsiballZ_systemd_service.py'
Jan 27 22:33:32 compute-0 sudo[172941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:32 compute-0 python3.9[172943]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:33:32 compute-0 sudo[172941]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:32 compute-0 sudo[173094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idqxbdnjumoldyjxxuvuffxeezmbeavh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553212.6228907-455-80562494880266/AnsiballZ_systemd_service.py'
Jan 27 22:33:32 compute-0 sudo[173094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:33 compute-0 python3.9[173096]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:33:33 compute-0 sudo[173094]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:33 compute-0 sudo[173247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfsjfevreqjomcvpzjnhdzaqyyfcyyie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553213.3074172-455-241055429504776/AnsiballZ_systemd_service.py'
Jan 27 22:33:33 compute-0 sudo[173247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:33 compute-0 python3.9[173249]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:33:33 compute-0 sudo[173247]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:34 compute-0 sudo[173400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvovmtzldihqgwkgzmtcoapsjzlfxiec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553214.0812685-455-7671122058971/AnsiballZ_systemd_service.py'
Jan 27 22:33:34 compute-0 sudo[173400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:34 compute-0 python3.9[173402]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:33:34 compute-0 sudo[173400]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:35 compute-0 sudo[173553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nflxocfsdzuwrpnxbjcbhwdkjzrjorln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553214.843753-455-153044614233209/AnsiballZ_systemd_service.py'
Jan 27 22:33:35 compute-0 sudo[173553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:35 compute-0 python3.9[173555]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:33:35 compute-0 sudo[173553]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:36 compute-0 sudo[173706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-advmjorowlmbblmgdgfqauoxycgrartb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553215.815362-514-258123783743837/AnsiballZ_file.py'
Jan 27 22:33:36 compute-0 sudo[173706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:36 compute-0 python3.9[173708]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:33:36 compute-0 sudo[173706]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:36 compute-0 sudo[173858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxkrudbomkwoegnejwgpejlpnrlidptn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553216.3872843-514-96064018874985/AnsiballZ_file.py'
Jan 27 22:33:36 compute-0 sudo[173858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:36 compute-0 python3.9[173860]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:33:36 compute-0 sudo[173858]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:37 compute-0 podman[173973]: 2026-01-27 22:33:37.360365002 +0000 UTC m=+0.060329169 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 27 22:33:37 compute-0 sudo[174029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oespugmohlqniltmoephrprkfvzgppoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553217.0774572-514-202905959195554/AnsiballZ_file.py'
Jan 27 22:33:37 compute-0 sudo[174029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:37 compute-0 python3.9[174031]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:33:37 compute-0 sudo[174029]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:37 compute-0 sudo[174181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilvxspbntsrxwehmqzobvnupabjzdouy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553217.6934214-514-186395871110628/AnsiballZ_file.py'
Jan 27 22:33:37 compute-0 sudo[174181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:38 compute-0 python3.9[174183]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:33:38 compute-0 sudo[174181]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:38 compute-0 sudo[174333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dybnzxtlsplqwfaeudfdcmwztzsgmgty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553218.2218962-514-81329560127125/AnsiballZ_file.py'
Jan 27 22:33:38 compute-0 sudo[174333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:38 compute-0 python3.9[174335]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:33:38 compute-0 sudo[174333]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:39 compute-0 sudo[174485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyrfmefxxhfruxcrebcogdymtqdzvizt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553218.7712486-514-192936212520836/AnsiballZ_file.py'
Jan 27 22:33:39 compute-0 sudo[174485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:39 compute-0 python3.9[174487]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:33:39 compute-0 sudo[174485]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:39 compute-0 sudo[174637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuskrkbxfruklvbjfhanstnpwvxpnpft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553219.3889778-514-241016508428917/AnsiballZ_file.py'
Jan 27 22:33:39 compute-0 sudo[174637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:39 compute-0 python3.9[174639]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:33:39 compute-0 sudo[174637]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:40 compute-0 sudo[174789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iradaykpnwuqltwexczudodaovnvbcts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553219.9524858-514-162155166375070/AnsiballZ_file.py'
Jan 27 22:33:40 compute-0 sudo[174789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:40 compute-0 python3.9[174791]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:33:40 compute-0 sudo[174789]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:40 compute-0 sudo[174941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhmnglsbcdlgpfblbiiodxcloifckznd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553220.5711567-571-27420028438726/AnsiballZ_file.py'
Jan 27 22:33:40 compute-0 sudo[174941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:41 compute-0 python3.9[174943]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:33:41 compute-0 sudo[174941]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:41 compute-0 sudo[175093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvirkkfegugravstvmyqsbzesegfayri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553221.2522192-571-162363471711981/AnsiballZ_file.py'
Jan 27 22:33:41 compute-0 sudo[175093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:41 compute-0 python3.9[175095]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:33:41 compute-0 sudo[175093]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:42 compute-0 sudo[175245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-poseytznoovmzcbtvnkisexgoiuzchth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553221.8437138-571-4076286972367/AnsiballZ_file.py'
Jan 27 22:33:42 compute-0 sudo[175245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:42 compute-0 python3.9[175247]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:33:42 compute-0 sudo[175245]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:42 compute-0 sudo[175397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofqoijxoiivjtivbyrlzokaybsvccsyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553222.4912448-571-195906815426402/AnsiballZ_file.py'
Jan 27 22:33:42 compute-0 sudo[175397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:42 compute-0 python3.9[175399]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:33:42 compute-0 sudo[175397]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:43 compute-0 sudo[175549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrhehywmmcvymwoymkydlrzgxnybykct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553223.0598817-571-238170283778803/AnsiballZ_file.py'
Jan 27 22:33:43 compute-0 sudo[175549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:43 compute-0 python3.9[175551]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:33:43 compute-0 sudo[175549]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:43 compute-0 sudo[175701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdjlngdiirafyjgdbpdkayrthsssegum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553223.6542425-571-117479686509856/AnsiballZ_file.py'
Jan 27 22:33:43 compute-0 sudo[175701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:44 compute-0 python3.9[175703]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:33:44 compute-0 sudo[175701]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:44 compute-0 sudo[175868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pihjfdjvcunuliszougnwiyogakfmohn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553224.2369952-571-38807742126187/AnsiballZ_file.py'
Jan 27 22:33:44 compute-0 sudo[175868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:44 compute-0 podman[175827]: 2026-01-27 22:33:44.57175334 +0000 UTC m=+0.092721670 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 27 22:33:44 compute-0 python3.9[175873]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:33:44 compute-0 sudo[175868]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:44 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 27 22:33:45 compute-0 sudo[176032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twhcqoyffdwqkaspmqautyeurztvryfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553224.886105-571-41727772921843/AnsiballZ_file.py'
Jan 27 22:33:45 compute-0 sudo[176032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:45 compute-0 python3.9[176034]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:33:45 compute-0 sudo[176032]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:45 compute-0 sudo[176184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osvuoshipjvzbnbdhljgwhpkguxgjyvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553225.611761-629-90652491880563/AnsiballZ_command.py'
Jan 27 22:33:45 compute-0 sudo[176184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:46 compute-0 python3.9[176186]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:33:46 compute-0 sudo[176184]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:46 compute-0 python3.9[176338]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 22:33:47 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 27 22:33:47 compute-0 sudo[176489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iouxgjpqnrimyerzwoqpgguopsmzerqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553227.102121-647-24610161161586/AnsiballZ_systemd_service.py'
Jan 27 22:33:47 compute-0 sudo[176489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:47 compute-0 python3.9[176491]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 22:33:47 compute-0 systemd[1]: Reloading.
Jan 27 22:33:47 compute-0 systemd-rc-local-generator[176518]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:33:47 compute-0 systemd-sysv-generator[176521]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:33:48 compute-0 sudo[176489]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:48 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 27 22:33:48 compute-0 sudo[176676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hagmljcqpcythkswvbyktvothxwnjbkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553228.1779335-655-196154894973832/AnsiballZ_command.py'
Jan 27 22:33:48 compute-0 sudo[176676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:48 compute-0 python3.9[176678]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:33:48 compute-0 sudo[176676]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:49 compute-0 sudo[176829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osnenvwylchxrvbyovyymogxltcqsiit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553228.8727813-655-172085754506888/AnsiballZ_command.py'
Jan 27 22:33:49 compute-0 sudo[176829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:49 compute-0 python3.9[176831]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:33:49 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 27 22:33:49 compute-0 sudo[176829]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:49 compute-0 sudo[176983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfsedryxgwjxsxqopkemvgkewzilsqsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553229.681764-655-234928385476262/AnsiballZ_command.py'
Jan 27 22:33:49 compute-0 sudo[176983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:50 compute-0 python3.9[176985]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:33:50 compute-0 sudo[176983]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:50 compute-0 sudo[177136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmixqsovdaywfgsvpclmamdruozbrbzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553230.329075-655-14899404987444/AnsiballZ_command.py'
Jan 27 22:33:50 compute-0 sudo[177136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:50 compute-0 python3.9[177138]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:33:50 compute-0 sudo[177136]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:51 compute-0 sudo[177289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgwstguykezjtwcnkhwtpgqcoaymizoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553230.903318-655-205679521166873/AnsiballZ_command.py'
Jan 27 22:33:51 compute-0 sudo[177289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:51 compute-0 python3.9[177291]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:33:51 compute-0 sudo[177289]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:51 compute-0 sudo[177442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuybnlpuuvfpnirudrifodjpcunisyes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553231.656426-655-168283703692650/AnsiballZ_command.py'
Jan 27 22:33:51 compute-0 sudo[177442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:52 compute-0 python3.9[177444]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:33:52 compute-0 sudo[177442]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:52 compute-0 sudo[177595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tznofzrkjeyeoutzibfjstjzfsuxnlbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553232.2486959-655-99119930102810/AnsiballZ_command.py'
Jan 27 22:33:52 compute-0 sudo[177595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:52 compute-0 python3.9[177597]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:33:52 compute-0 sudo[177595]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:53 compute-0 sudo[177748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nclebujvxuobxzqdpufeappwjwmzkbwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553232.922236-655-169412151037796/AnsiballZ_command.py'
Jan 27 22:33:53 compute-0 sudo[177748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:53 compute-0 python3.9[177750]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:33:53 compute-0 sudo[177748]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:54 compute-0 sudo[177901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opfzxfcybaljqfpdmsvdfnwsupzxlqfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553234.4401598-734-245592332148969/AnsiballZ_file.py'
Jan 27 22:33:54 compute-0 sudo[177901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:55 compute-0 python3.9[177903]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:33:55 compute-0 sudo[177901]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:55 compute-0 sudo[178053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmlfznkxsibifzfazglrjyfrkcoyldsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553235.3224986-734-11589442587725/AnsiballZ_file.py'
Jan 27 22:33:55 compute-0 sudo[178053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:55 compute-0 python3.9[178055]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:33:55 compute-0 sudo[178053]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:56 compute-0 sudo[178205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwkwgupflyxhyixjlpydveianiswlvoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553235.8849738-734-214712021232424/AnsiballZ_file.py'
Jan 27 22:33:56 compute-0 sudo[178205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:56 compute-0 python3.9[178207]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:33:56 compute-0 sudo[178205]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:56 compute-0 sudo[178357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfsmyfoayrrfxgmqqapiiuynlvekgqpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553236.4526856-756-40219366002313/AnsiballZ_file.py'
Jan 27 22:33:56 compute-0 sudo[178357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:56 compute-0 python3.9[178359]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:33:56 compute-0 sudo[178357]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:57 compute-0 sudo[178509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqxgcnlmtnkiolvpfchkttoyeiimbjoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553237.0201814-756-4608397799238/AnsiballZ_file.py'
Jan 27 22:33:57 compute-0 sudo[178509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:57 compute-0 python3.9[178511]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:33:57 compute-0 sudo[178509]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:58 compute-0 sudo[178661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyosnxalxoopkwuhewdhudmzcpnbshzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553237.6225424-756-219657227420246/AnsiballZ_file.py'
Jan 27 22:33:58 compute-0 sudo[178661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:58 compute-0 python3.9[178663]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:33:58 compute-0 sudo[178661]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:58 compute-0 sudo[178813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubtfgdbuzlnfglusnggerayskqrymhmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553238.4537497-756-228487572324763/AnsiballZ_file.py'
Jan 27 22:33:58 compute-0 sudo[178813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:58 compute-0 python3.9[178815]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:33:58 compute-0 sudo[178813]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:59 compute-0 sudo[178965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scdpmuzavicibefngntpifpixncmxjep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553239.1062317-756-240853268920771/AnsiballZ_file.py'
Jan 27 22:33:59 compute-0 sudo[178965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:33:59 compute-0 python3.9[178967]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:33:59 compute-0 sudo[178965]: pam_unix(sudo:session): session closed for user root
Jan 27 22:33:59 compute-0 sudo[179117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrbptmhjlxkevpvdesooxyzrdvmuikmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553239.7440991-756-97322949868243/AnsiballZ_file.py'
Jan 27 22:33:59 compute-0 sudo[179117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:34:00 compute-0 python3.9[179119]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:34:00 compute-0 sudo[179117]: pam_unix(sudo:session): session closed for user root
Jan 27 22:34:00 compute-0 sudo[179269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyycmurfbsqrikuwmiwreepquavrilkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553240.3736246-756-216815069982890/AnsiballZ_file.py'
Jan 27 22:34:00 compute-0 sudo[179269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:34:00 compute-0 python3.9[179271]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:34:00 compute-0 sudo[179269]: pam_unix(sudo:session): session closed for user root
Jan 27 22:34:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:34:04.119 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:34:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:34:04.121 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:34:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:34:04.121 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:34:05 compute-0 sudo[179421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njfyarfprfrscfknzhhhirajvpyyfysu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553244.8520675-925-70376845174722/AnsiballZ_getent.py'
Jan 27 22:34:05 compute-0 sudo[179421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:34:05 compute-0 python3.9[179423]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 27 22:34:05 compute-0 sudo[179421]: pam_unix(sudo:session): session closed for user root
Jan 27 22:34:06 compute-0 sudo[179574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdbirxyzpwtmgusughzfahyaenvpqavq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553245.6999233-933-243401988834077/AnsiballZ_group.py'
Jan 27 22:34:06 compute-0 sudo[179574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:34:06 compute-0 python3.9[179576]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 27 22:34:06 compute-0 groupadd[179577]: group added to /etc/group: name=nova, GID=42436
Jan 27 22:34:06 compute-0 groupadd[179577]: group added to /etc/gshadow: name=nova
Jan 27 22:34:06 compute-0 groupadd[179577]: new group: name=nova, GID=42436
Jan 27 22:34:06 compute-0 sudo[179574]: pam_unix(sudo:session): session closed for user root
Jan 27 22:34:06 compute-0 sudo[179732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abqykjngrjrhoheyctfszzvjmnikxobx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553246.5174336-941-122496201997483/AnsiballZ_user.py'
Jan 27 22:34:06 compute-0 sudo[179732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:34:07 compute-0 python3.9[179734]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 27 22:34:07 compute-0 useradd[179736]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Jan 27 22:34:07 compute-0 useradd[179736]: add 'nova' to group 'libvirt'
Jan 27 22:34:07 compute-0 useradd[179736]: add 'nova' to shadow group 'libvirt'
Jan 27 22:34:07 compute-0 sudo[179732]: pam_unix(sudo:session): session closed for user root
Jan 27 22:34:08 compute-0 sshd-session[179767]: Accepted publickey for zuul from 192.168.122.30 port 41126 ssh2: ECDSA SHA256:f2siSFgqhRl+V43NMPJ82N3mZUylXFtu0KAbYfQTK7A
Jan 27 22:34:08 compute-0 systemd-logind[789]: New session 24 of user zuul.
Jan 27 22:34:08 compute-0 systemd[1]: Started Session 24 of User zuul.
Jan 27 22:34:08 compute-0 sshd-session[179767]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 22:34:08 compute-0 podman[179769]: 2026-01-27 22:34:08.221579582 +0000 UTC m=+0.059836977 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 27 22:34:08 compute-0 sshd-session[179784]: Received disconnect from 192.168.122.30 port 41126:11: disconnected by user
Jan 27 22:34:08 compute-0 sshd-session[179784]: Disconnected from user zuul 192.168.122.30 port 41126
Jan 27 22:34:08 compute-0 sshd-session[179767]: pam_unix(sshd:session): session closed for user zuul
Jan 27 22:34:08 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Jan 27 22:34:08 compute-0 systemd-logind[789]: Session 24 logged out. Waiting for processes to exit.
Jan 27 22:34:08 compute-0 systemd-logind[789]: Removed session 24.
Jan 27 22:34:08 compute-0 python3.9[179937]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:34:09 compute-0 python3.9[180058]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769553248.4746888-966-173887124572843/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:34:10 compute-0 python3.9[180208]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:34:10 compute-0 python3.9[180284]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:34:10 compute-0 python3.9[180434]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:34:11 compute-0 python3.9[180555]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769553250.5523622-966-221788832316614/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:34:12 compute-0 python3.9[180705]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:34:12 compute-0 python3.9[180826]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769553251.6964207-966-225964105593824/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:34:13 compute-0 python3.9[180976]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:34:13 compute-0 python3.9[181097]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769553252.879029-966-256217000252728/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:34:14 compute-0 python3.9[181247]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:34:14 compute-0 podman[181342]: 2026-01-27 22:34:14.783058889 +0000 UTC m=+0.081770392 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 27 22:34:14 compute-0 python3.9[181383]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769553253.971985-966-105541789433022/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:34:15 compute-0 sudo[181545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdwillcchehhgnmcrmxtpmgoktydeulu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553255.2075744-1049-193828911036310/AnsiballZ_file.py'
Jan 27 22:34:15 compute-0 sudo[181545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:34:15 compute-0 python3.9[181547]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:34:15 compute-0 sudo[181545]: pam_unix(sudo:session): session closed for user root
Jan 27 22:34:16 compute-0 sudo[181697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kslhudcvsbmmytniqepbtolwbujvhlci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553255.945397-1057-72965433752183/AnsiballZ_copy.py'
Jan 27 22:34:16 compute-0 sudo[181697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:34:16 compute-0 python3.9[181699]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:34:16 compute-0 sudo[181697]: pam_unix(sudo:session): session closed for user root
Jan 27 22:34:16 compute-0 sudo[181849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvnmkzqsiadestlpgbpyrebalbabmefv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553256.6616342-1065-141524504822510/AnsiballZ_stat.py'
Jan 27 22:34:17 compute-0 sudo[181849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:34:17 compute-0 python3.9[181851]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:34:17 compute-0 sudo[181849]: pam_unix(sudo:session): session closed for user root
Jan 27 22:34:17 compute-0 sudo[182001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-narukexcpxstkkljacakknfmrqbugrld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553257.4187243-1073-177777579967151/AnsiballZ_stat.py'
Jan 27 22:34:17 compute-0 sudo[182001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:34:17 compute-0 python3.9[182003]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:34:17 compute-0 sudo[182001]: pam_unix(sudo:session): session closed for user root
Jan 27 22:34:18 compute-0 sudo[182124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxezyrqrqbtvbvbmgvrmljfhmjaqqmzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553257.4187243-1073-177777579967151/AnsiballZ_copy.py'
Jan 27 22:34:18 compute-0 sudo[182124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:34:18 compute-0 python3.9[182126]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769553257.4187243-1073-177777579967151/.source _original_basename=.jm27etx0 follow=False checksum=18ae2680e98276cc248b77cde7a3341151aab380 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 27 22:34:18 compute-0 sudo[182124]: pam_unix(sudo:session): session closed for user root
Jan 27 22:34:19 compute-0 python3.9[182278]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:34:19 compute-0 python3.9[182430]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:34:20 compute-0 python3.9[182551]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769553259.33576-1099-5978275094713/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:34:21 compute-0 python3.9[182701]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:34:21 compute-0 python3.9[182822]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769553260.6218164-1114-121708596764145/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:34:22 compute-0 sudo[182972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mekdnhviavclhtytpgwzlcmlfuezgccq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553262.0822718-1131-86526753256274/AnsiballZ_container_config_data.py'
Jan 27 22:34:22 compute-0 sudo[182972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:34:22 compute-0 python3.9[182974]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 27 22:34:22 compute-0 sudo[182972]: pam_unix(sudo:session): session closed for user root
Jan 27 22:34:23 compute-0 sudo[183124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhttnfubcdlmqekbynytybcwbvakwujx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553263.094158-1142-104728707121736/AnsiballZ_container_config_hash.py'
Jan 27 22:34:23 compute-0 sudo[183124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:34:23 compute-0 python3.9[183126]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 22:34:23 compute-0 sudo[183124]: pam_unix(sudo:session): session closed for user root
Jan 27 22:34:24 compute-0 sudo[183276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzkwejlbhqyndlngbojuhkoldhwpivcy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769553264.0250123-1152-182867943991813/AnsiballZ_edpm_container_manage.py'
Jan 27 22:34:24 compute-0 sudo[183276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:34:25 compute-0 python3[183278]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 22:34:25 compute-0 podman[183313]: 2026-01-27 22:34:25.20663488 +0000 UTC m=+0.050310323 container create fa6e890ad5d53a78eca9b3facb1f6726bccdc9766984baf66345503133ef5412 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=nova_compute_init, config_id=edpm, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 27 22:34:25 compute-0 podman[183313]: 2026-01-27 22:34:25.18198253 +0000 UTC m=+0.025657983 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 27 22:34:25 compute-0 python3[183278]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 27 22:34:25 compute-0 sudo[183276]: pam_unix(sudo:session): session closed for user root
Jan 27 22:34:25 compute-0 sudo[183501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egbpusvuqptruklirarqtfwskkodgtyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553265.5063112-1160-216904165941888/AnsiballZ_stat.py'
Jan 27 22:34:25 compute-0 sudo[183501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:34:25 compute-0 python3.9[183503]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:34:26 compute-0 sudo[183501]: pam_unix(sudo:session): session closed for user root
Jan 27 22:34:26 compute-0 sudo[183655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmfcbwnxubyuuvomvilinlotzpdjzjeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553266.4815328-1172-265168992776940/AnsiballZ_container_config_data.py'
Jan 27 22:34:26 compute-0 sudo[183655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:34:27 compute-0 python3.9[183657]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 27 22:34:27 compute-0 sudo[183655]: pam_unix(sudo:session): session closed for user root
Jan 27 22:34:27 compute-0 sudo[183807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kibgikkwbfgzoayovvcnkgcuyxdwigsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553267.341603-1183-158191723614568/AnsiballZ_container_config_hash.py'
Jan 27 22:34:27 compute-0 sudo[183807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:34:27 compute-0 python3.9[183809]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 22:34:27 compute-0 sudo[183807]: pam_unix(sudo:session): session closed for user root
Jan 27 22:34:28 compute-0 sudo[183959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxhmhhavqphxfjeuvfjajewjsnvypwny ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769553268.0666523-1193-281451645099044/AnsiballZ_edpm_container_manage.py'
Jan 27 22:34:28 compute-0 sudo[183959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:34:28 compute-0 python3[183961]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 22:34:28 compute-0 podman[183999]: 2026-01-27 22:34:28.864439981 +0000 UTC m=+0.055314340 container create da103180c47380ff29c2682126100d6e6038082b4b04cb558542dfdf6b659154 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible)
Jan 27 22:34:28 compute-0 podman[183999]: 2026-01-27 22:34:28.836797637 +0000 UTC m=+0.027672106 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 27 22:34:28 compute-0 python3[183961]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 27 22:34:29 compute-0 sudo[183959]: pam_unix(sudo:session): session closed for user root
Jan 27 22:34:29 compute-0 sudo[184186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkhxevgwutkcwtrljjjhqryvimlkbjxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553269.2375877-1201-49366504098602/AnsiballZ_stat.py'
Jan 27 22:34:29 compute-0 sudo[184186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:34:29 compute-0 python3.9[184188]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:34:29 compute-0 sudo[184186]: pam_unix(sudo:session): session closed for user root
Jan 27 22:34:30 compute-0 sudo[184340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fziarvpijdrgebpwmbwodzlzlowzhoiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553269.940679-1210-141673262927509/AnsiballZ_file.py'
Jan 27 22:34:30 compute-0 sudo[184340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:34:30 compute-0 python3.9[184342]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:34:30 compute-0 sudo[184340]: pam_unix(sudo:session): session closed for user root
Jan 27 22:34:30 compute-0 sudo[184491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivsuqhubbqwybczjtqfnjyyjckpowwgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553270.4975002-1210-268481375272119/AnsiballZ_copy.py'
Jan 27 22:34:30 compute-0 sudo[184491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:34:31 compute-0 python3.9[184493]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769553270.4975002-1210-268481375272119/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:34:31 compute-0 sudo[184491]: pam_unix(sudo:session): session closed for user root
Jan 27 22:34:31 compute-0 sudo[184567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mstwwqdldepcupaayhrtdxtcaisopgxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553270.4975002-1210-268481375272119/AnsiballZ_systemd.py'
Jan 27 22:34:31 compute-0 sudo[184567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:34:31 compute-0 python3.9[184569]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 22:34:31 compute-0 systemd[1]: Reloading.
Jan 27 22:34:31 compute-0 systemd-sysv-generator[184600]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:34:31 compute-0 systemd-rc-local-generator[184596]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:34:31 compute-0 sudo[184567]: pam_unix(sudo:session): session closed for user root
Jan 27 22:34:32 compute-0 sudo[184678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tccnirodpqccijoqyverzkvcxthdamzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553270.4975002-1210-268481375272119/AnsiballZ_systemd.py'
Jan 27 22:34:32 compute-0 sudo[184678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:34:32 compute-0 python3.9[184680]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:34:32 compute-0 systemd[1]: Reloading.
Jan 27 22:34:32 compute-0 systemd-rc-local-generator[184710]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:34:32 compute-0 systemd-sysv-generator[184713]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:34:32 compute-0 systemd[1]: Starting nova_compute container...
Jan 27 22:34:32 compute-0 systemd[1]: Started libcrun container.
Jan 27 22:34:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60631a951d81501126afac98b20add501780c15b5f3709bfec3aa7901b3d86af/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 27 22:34:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60631a951d81501126afac98b20add501780c15b5f3709bfec3aa7901b3d86af/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 27 22:34:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60631a951d81501126afac98b20add501780c15b5f3709bfec3aa7901b3d86af/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 27 22:34:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60631a951d81501126afac98b20add501780c15b5f3709bfec3aa7901b3d86af/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 27 22:34:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60631a951d81501126afac98b20add501780c15b5f3709bfec3aa7901b3d86af/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 27 22:34:33 compute-0 podman[184720]: 2026-01-27 22:34:33.003098049 +0000 UTC m=+0.090065055 container init da103180c47380ff29c2682126100d6e6038082b4b04cb558542dfdf6b659154 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 22:34:33 compute-0 podman[184720]: 2026-01-27 22:34:33.008771493 +0000 UTC m=+0.095738489 container start da103180c47380ff29c2682126100d6e6038082b4b04cb558542dfdf6b659154 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm)
Jan 27 22:34:33 compute-0 podman[184720]: nova_compute
Jan 27 22:34:33 compute-0 nova_compute[184735]: + sudo -E kolla_set_configs
Jan 27 22:34:33 compute-0 systemd[1]: Started nova_compute container.
Jan 27 22:34:33 compute-0 sudo[184678]: pam_unix(sudo:session): session closed for user root
Jan 27 22:34:33 compute-0 nova_compute[184735]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 27 22:34:33 compute-0 nova_compute[184735]: INFO:__main__:Validating config file
Jan 27 22:34:33 compute-0 nova_compute[184735]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 27 22:34:33 compute-0 nova_compute[184735]: INFO:__main__:Copying service configuration files
Jan 27 22:34:33 compute-0 nova_compute[184735]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 27 22:34:33 compute-0 nova_compute[184735]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 27 22:34:33 compute-0 nova_compute[184735]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 27 22:34:33 compute-0 nova_compute[184735]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 27 22:34:33 compute-0 nova_compute[184735]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 27 22:34:33 compute-0 nova_compute[184735]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 27 22:34:33 compute-0 nova_compute[184735]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 27 22:34:33 compute-0 nova_compute[184735]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 27 22:34:33 compute-0 nova_compute[184735]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 27 22:34:33 compute-0 nova_compute[184735]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 27 22:34:33 compute-0 nova_compute[184735]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 27 22:34:33 compute-0 nova_compute[184735]: INFO:__main__:Deleting /etc/ceph
Jan 27 22:34:33 compute-0 nova_compute[184735]: INFO:__main__:Creating directory /etc/ceph
Jan 27 22:34:33 compute-0 nova_compute[184735]: INFO:__main__:Setting permission for /etc/ceph
Jan 27 22:34:33 compute-0 nova_compute[184735]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 27 22:34:33 compute-0 nova_compute[184735]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 27 22:34:33 compute-0 nova_compute[184735]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 27 22:34:33 compute-0 nova_compute[184735]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 27 22:34:33 compute-0 nova_compute[184735]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 27 22:34:33 compute-0 nova_compute[184735]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 27 22:34:33 compute-0 nova_compute[184735]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 27 22:34:33 compute-0 nova_compute[184735]: INFO:__main__:Writing out command to execute
Jan 27 22:34:33 compute-0 nova_compute[184735]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 27 22:34:33 compute-0 nova_compute[184735]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 27 22:34:33 compute-0 nova_compute[184735]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 27 22:34:33 compute-0 nova_compute[184735]: ++ cat /run_command
Jan 27 22:34:33 compute-0 nova_compute[184735]: + CMD=nova-compute
Jan 27 22:34:33 compute-0 nova_compute[184735]: + ARGS=
Jan 27 22:34:33 compute-0 nova_compute[184735]: + sudo kolla_copy_cacerts
Jan 27 22:34:33 compute-0 nova_compute[184735]: + [[ ! -n '' ]]
Jan 27 22:34:33 compute-0 nova_compute[184735]: + . kolla_extend_start
Jan 27 22:34:33 compute-0 nova_compute[184735]: Running command: 'nova-compute'
Jan 27 22:34:33 compute-0 nova_compute[184735]: + echo 'Running command: '\''nova-compute'\'''
Jan 27 22:34:33 compute-0 nova_compute[184735]: + umask 0022
Jan 27 22:34:33 compute-0 nova_compute[184735]: + exec nova-compute
Jan 27 22:34:33 compute-0 python3.9[184896]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:34:34 compute-0 python3.9[185047]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:34:35 compute-0 nova_compute[184735]: 2026-01-27 22:34:35.035 184739 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 27 22:34:35 compute-0 nova_compute[184735]: 2026-01-27 22:34:35.035 184739 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 27 22:34:35 compute-0 nova_compute[184735]: 2026-01-27 22:34:35.035 184739 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 27 22:34:35 compute-0 nova_compute[184735]: 2026-01-27 22:34:35.035 184739 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 27 22:34:35 compute-0 nova_compute[184735]: 2026-01-27 22:34:35.170 184739 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:34:35 compute-0 nova_compute[184735]: 2026-01-27 22:34:35.192 184739 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:34:35 compute-0 nova_compute[184735]: 2026-01-27 22:34:35.192 184739 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 27 22:34:35 compute-0 python3.9[185199]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:34:35 compute-0 nova_compute[184735]: 2026-01-27 22:34:35.895 184739 INFO nova.virt.driver [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.013 184739 INFO nova.compute.provider_config [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.026 184739 DEBUG oslo_concurrency.lockutils [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.026 184739 DEBUG oslo_concurrency.lockutils [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.027 184739 DEBUG oslo_concurrency.lockutils [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.027 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.027 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.027 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.027 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.028 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.028 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.028 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.028 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.028 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.028 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.028 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.029 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.029 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.029 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.029 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.029 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.029 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.029 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.030 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.030 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.030 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.030 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.030 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.030 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.030 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.031 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.031 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.031 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.031 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.031 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.031 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.032 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.032 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.032 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.032 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.032 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.032 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.033 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.033 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.033 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.033 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.033 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.034 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.034 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.034 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.034 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.034 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.034 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.034 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.035 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.035 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.035 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.035 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.035 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.035 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.035 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.036 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.036 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.036 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.036 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.036 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.036 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.036 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.037 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.037 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.037 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.037 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.037 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.037 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.037 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.038 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.038 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.038 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.038 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.038 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.038 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.038 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.039 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.039 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.039 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.039 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.039 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.039 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.039 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.040 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.040 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.040 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.040 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.040 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.040 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.040 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.041 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.041 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.041 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.041 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.041 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.041 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.041 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.041 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.042 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.042 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.042 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.042 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.042 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.042 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.042 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.043 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.043 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.043 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.043 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.043 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.043 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.044 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.044 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 sudo[185351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otftwluijfbjvipgymtuoymdlnxjphcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553275.5474536-1270-144100256932690/AnsiballZ_podman_container.py'
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.044 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.044 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.044 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.044 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.045 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.045 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.045 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.045 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.045 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.046 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.046 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.046 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.046 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.046 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 sudo[185351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.047 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.047 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.047 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.047 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.047 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.048 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.048 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.048 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.048 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.048 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.048 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.048 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.049 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.049 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.049 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.049 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.049 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.049 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.049 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.050 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.050 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.050 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.050 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.050 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.050 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.050 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.051 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.051 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.051 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.051 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.051 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.051 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.051 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.052 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.052 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.052 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.052 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.052 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.052 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.053 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.053 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.053 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.053 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.053 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.053 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.053 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.054 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.054 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.054 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.054 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.054 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.054 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.054 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.055 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.055 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.055 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.055 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.055 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.055 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.056 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.056 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.056 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.056 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.056 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.056 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.056 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.057 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.057 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.057 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.057 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.057 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.057 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.057 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.058 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.058 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.058 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.058 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.058 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.058 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.058 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.058 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.059 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.059 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.059 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.059 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.059 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.059 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.059 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.060 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.060 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.060 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.060 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.060 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.060 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.061 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.061 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.061 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.061 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.061 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.061 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.061 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.062 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.062 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.062 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.062 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.062 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.062 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.062 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.063 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.063 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.063 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.063 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.063 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.063 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.064 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.064 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.064 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.064 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.064 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.064 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.065 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.065 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.065 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.065 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.065 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.066 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.066 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.066 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.066 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.066 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.067 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.067 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.067 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.067 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.067 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.067 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.067 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.068 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.068 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.068 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.068 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.068 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.068 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.068 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.069 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.069 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.069 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.069 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.069 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.069 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.069 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.070 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.070 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.070 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.070 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.070 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.070 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.070 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.071 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.071 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.071 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.071 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.071 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.071 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.071 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.071 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.072 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.072 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.072 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.072 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.072 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.072 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.072 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.073 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.073 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.073 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.073 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.073 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.073 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.073 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.074 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.074 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.074 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.074 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.074 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.074 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.074 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.075 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.075 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.075 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.075 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.075 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.075 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.075 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.076 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.076 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.076 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.076 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.076 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.076 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.076 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.077 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.077 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.077 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.077 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.077 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.077 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.077 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.078 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.078 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.078 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.078 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.078 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.078 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.078 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.079 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.079 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.079 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.079 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.079 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.079 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.079 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.080 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.080 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.080 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.080 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.080 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.080 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.081 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.081 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.081 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.081 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.081 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.082 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.082 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.082 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.082 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.082 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.082 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.083 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.083 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.083 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.083 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.083 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.084 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.084 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.084 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.084 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.084 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.084 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.084 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.085 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.085 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.085 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.085 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.085 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.085 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.085 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.086 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.086 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.086 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.086 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.086 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.086 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.086 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.087 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.087 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.087 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.087 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.087 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.087 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.087 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.088 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.088 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.088 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.088 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.088 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.088 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.088 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.089 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.089 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.089 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.089 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.089 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.089 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.089 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.090 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.090 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.090 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.090 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.090 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.090 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.090 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.091 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.091 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.091 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.091 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.091 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.091 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.092 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.092 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.092 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.092 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.092 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.092 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.092 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.093 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.093 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.093 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.093 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.093 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.093 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.093 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.094 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.094 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.094 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.094 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.094 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.094 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.094 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.094 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.095 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.095 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.095 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.095 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.095 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.095 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.096 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.096 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.096 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.096 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.096 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.096 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.096 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.097 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.097 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.097 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.097 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.097 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.097 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.097 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.098 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.098 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.098 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.098 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.098 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.098 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.098 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.099 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.099 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.099 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.099 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.099 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.099 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.099 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.100 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.100 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.100 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.100 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.100 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.100 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.101 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.101 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.101 184739 WARNING oslo_config.cfg [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 27 22:34:36 compute-0 nova_compute[184735]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 27 22:34:36 compute-0 nova_compute[184735]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 27 22:34:36 compute-0 nova_compute[184735]: and ``live_migration_inbound_addr`` respectively.
Jan 27 22:34:36 compute-0 nova_compute[184735]: ).  Its value may be silently ignored in the future.
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.101 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.101 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.101 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.102 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.102 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.102 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.102 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.102 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.102 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.103 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.103 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.103 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.103 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.103 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.103 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.103 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.104 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.104 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.104 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.104 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.104 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.104 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.104 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.105 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.105 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.105 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.105 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.105 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.105 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.105 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.106 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.106 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.106 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.106 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.106 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.106 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.107 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.107 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.107 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.107 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.107 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.107 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.107 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.108 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.108 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.108 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.108 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.108 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.108 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.108 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.109 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.109 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.109 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.109 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.109 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.109 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.109 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.110 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.110 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.110 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.110 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.110 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.110 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.111 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.111 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.111 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.111 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.111 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.111 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.111 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.112 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.112 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.112 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.112 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.112 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.112 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.112 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.113 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.113 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.113 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.113 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.113 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.113 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.114 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.114 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.114 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.114 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.114 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.114 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.114 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.115 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.115 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.115 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.115 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.115 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.115 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.115 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.115 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.116 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.116 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.116 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.116 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.116 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.116 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.116 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.117 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.117 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.117 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.117 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.117 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.117 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.117 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.118 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.118 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.118 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.118 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.118 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.118 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.118 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.119 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.119 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.119 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.119 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.119 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.119 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.119 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.120 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.120 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.120 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.120 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.120 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.120 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.120 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.121 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.121 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.121 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.121 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.121 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.121 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.122 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.122 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.122 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.122 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.122 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.122 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.122 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.123 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.123 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.123 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.123 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.123 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.123 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.123 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.124 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.124 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.124 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.124 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.124 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.124 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.124 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.125 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.125 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.125 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.125 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.125 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.125 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.126 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.126 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.126 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.126 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.126 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.126 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.126 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.127 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.127 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.127 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.127 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.127 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.127 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.128 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.128 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.128 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.128 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.128 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.128 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.129 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.129 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.129 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.129 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.129 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.129 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.129 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.130 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.130 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.130 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.130 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.130 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.130 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.130 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.131 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.131 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.131 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.131 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.131 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.131 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.131 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.132 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.132 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.132 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.132 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.132 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.132 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.132 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.133 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.133 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.133 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.133 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.133 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.133 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.134 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.134 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.134 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.134 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.134 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.134 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.134 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.135 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.135 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.135 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.135 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.135 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.135 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.135 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.135 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.136 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.136 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.136 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.136 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.136 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.136 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.136 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.137 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.137 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.137 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.137 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.137 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.137 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.138 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.138 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.138 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.138 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.138 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.138 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.139 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.139 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.139 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.139 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.139 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.139 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.139 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.140 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.140 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.140 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.140 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.140 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.140 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.140 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.141 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.141 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.141 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.141 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.141 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.141 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.141 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.142 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.142 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.142 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.142 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.142 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.142 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.142 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.143 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.143 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.143 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.143 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.143 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.143 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.143 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.144 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.144 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.144 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.144 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.144 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.144 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.145 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.145 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.145 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.145 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.145 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.146 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.146 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.146 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.146 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.146 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.146 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.146 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.147 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.147 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.147 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.147 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.147 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.148 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.148 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.148 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.148 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.148 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.148 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.149 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.149 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.149 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.149 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.149 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.149 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.149 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.150 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.150 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.150 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.150 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.150 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.150 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.150 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.151 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.151 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.151 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.151 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.151 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.151 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.152 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.152 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.152 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.152 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.152 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.152 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.152 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.153 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.153 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.153 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.153 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.153 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.153 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.153 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.154 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.154 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.154 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.154 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.154 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.154 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.154 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.155 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.155 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.155 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.155 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.155 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.155 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.155 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.156 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.156 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.156 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.156 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.156 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.156 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.156 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.157 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.157 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.157 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.157 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.157 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.157 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.157 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.158 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.158 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.158 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.158 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.158 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.158 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.158 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.158 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.159 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.159 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.159 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.159 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.159 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.159 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.159 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.160 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.160 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.160 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.160 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.160 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.160 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.161 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.161 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.161 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.161 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.161 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.161 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.161 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.162 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.162 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.162 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.162 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.162 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.162 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.162 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.162 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.163 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.163 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.163 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.163 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.163 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.163 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.163 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.164 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.164 184739 DEBUG oslo_service.service [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.165 184739 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.179 184739 DEBUG nova.virt.libvirt.host [None req-8fc64d03-59c5-4199-8cd1-41b4eb81ed4a - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.179 184739 DEBUG nova.virt.libvirt.host [None req-8fc64d03-59c5-4199-8cd1-41b4eb81ed4a - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.179 184739 DEBUG nova.virt.libvirt.host [None req-8fc64d03-59c5-4199-8cd1-41b4eb81ed4a - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.180 184739 DEBUG nova.virt.libvirt.host [None req-8fc64d03-59c5-4199-8cd1-41b4eb81ed4a - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 27 22:34:36 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 27 22:34:36 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.248 184739 DEBUG nova.virt.libvirt.host [None req-8fc64d03-59c5-4199-8cd1-41b4eb81ed4a - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fdcd4eb0c40> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.251 184739 DEBUG nova.virt.libvirt.host [None req-8fc64d03-59c5-4199-8cd1-41b4eb81ed4a - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fdcd4eb0c40> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.252 184739 INFO nova.virt.libvirt.driver [None req-8fc64d03-59c5-4199-8cd1-41b4eb81ed4a - - - - - -] Connection event '1' reason 'None'
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.273 184739 WARNING nova.virt.libvirt.driver [None req-8fc64d03-59c5-4199-8cd1-41b4eb81ed4a - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 27 22:34:36 compute-0 nova_compute[184735]: 2026-01-27 22:34:36.273 184739 DEBUG nova.virt.libvirt.volume.mount [None req-8fc64d03-59c5-4199-8cd1-41b4eb81ed4a - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 27 22:34:36 compute-0 python3.9[185353]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 27 22:34:36 compute-0 sudo[185351]: pam_unix(sudo:session): session closed for user root
Jan 27 22:34:36 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 22:34:36 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 22:34:36 compute-0 sudo[185585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeuysxqhylkesvytkvornmuylblkjrxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553276.5464993-1278-18475244876858/AnsiballZ_systemd.py'
Jan 27 22:34:36 compute-0 sudo[185585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:34:37 compute-0 nova_compute[184735]: 2026-01-27 22:34:37.016 184739 INFO nova.virt.libvirt.host [None req-8fc64d03-59c5-4199-8cd1-41b4eb81ed4a - - - - - -] Libvirt host capabilities <capabilities>
Jan 27 22:34:37 compute-0 nova_compute[184735]: 
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <host>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <uuid>3d6f6630-1343-4c09-b459-1f5514c0a933</uuid>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <cpu>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <arch>x86_64</arch>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model>EPYC-Rome-v4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <vendor>AMD</vendor>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <microcode version='16777317'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <signature family='23' model='49' stepping='0'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature name='x2apic'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature name='tsc-deadline'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature name='osxsave'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature name='hypervisor'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature name='tsc_adjust'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature name='spec-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature name='stibp'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature name='arch-capabilities'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature name='ssbd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature name='cmp_legacy'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature name='topoext'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature name='virt-ssbd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature name='lbrv'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature name='tsc-scale'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature name='vmcb-clean'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature name='pause-filter'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature name='pfthreshold'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature name='svme-addr-chk'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature name='rdctl-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature name='skip-l1dfl-vmentry'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature name='mds-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature name='pschange-mc-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <pages unit='KiB' size='4'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <pages unit='KiB' size='2048'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <pages unit='KiB' size='1048576'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </cpu>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <power_management>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <suspend_mem/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <suspend_disk/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <suspend_hybrid/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </power_management>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <iommu support='no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <migration_features>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <live/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <uri_transports>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <uri_transport>tcp</uri_transport>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <uri_transport>rdma</uri_transport>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </uri_transports>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </migration_features>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <topology>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <cells num='1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <cell id='0'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:           <memory unit='KiB'>7864308</memory>
Jan 27 22:34:37 compute-0 nova_compute[184735]:           <pages unit='KiB' size='4'>1966077</pages>
Jan 27 22:34:37 compute-0 nova_compute[184735]:           <pages unit='KiB' size='2048'>0</pages>
Jan 27 22:34:37 compute-0 nova_compute[184735]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 27 22:34:37 compute-0 nova_compute[184735]:           <distances>
Jan 27 22:34:37 compute-0 nova_compute[184735]:             <sibling id='0' value='10'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:           </distances>
Jan 27 22:34:37 compute-0 nova_compute[184735]:           <cpus num='8'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:           </cpus>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         </cell>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </cells>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </topology>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <cache>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </cache>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <secmodel>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model>selinux</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <doi>0</doi>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </secmodel>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <secmodel>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model>dac</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <doi>0</doi>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </secmodel>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   </host>
Jan 27 22:34:37 compute-0 nova_compute[184735]: 
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <guest>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <os_type>hvm</os_type>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <arch name='i686'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <wordsize>32</wordsize>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <domain type='qemu'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <domain type='kvm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </arch>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <features>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <pae/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <nonpae/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <acpi default='on' toggle='yes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <apic default='on' toggle='no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <cpuselection/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <deviceboot/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <disksnapshot default='on' toggle='no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <externalSnapshot/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </features>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   </guest>
Jan 27 22:34:37 compute-0 nova_compute[184735]: 
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <guest>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <os_type>hvm</os_type>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <arch name='x86_64'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <wordsize>64</wordsize>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <domain type='qemu'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <domain type='kvm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </arch>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <features>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <acpi default='on' toggle='yes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <apic default='on' toggle='no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <cpuselection/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <deviceboot/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <disksnapshot default='on' toggle='no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <externalSnapshot/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </features>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   </guest>
Jan 27 22:34:37 compute-0 nova_compute[184735]: 
Jan 27 22:34:37 compute-0 nova_compute[184735]: </capabilities>
Jan 27 22:34:37 compute-0 nova_compute[184735]: 
Jan 27 22:34:37 compute-0 nova_compute[184735]: 2026-01-27 22:34:37.023 184739 DEBUG nova.virt.libvirt.host [None req-8fc64d03-59c5-4199-8cd1-41b4eb81ed4a - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 27 22:34:37 compute-0 nova_compute[184735]: 2026-01-27 22:34:37.042 184739 DEBUG nova.virt.libvirt.host [None req-8fc64d03-59c5-4199-8cd1-41b4eb81ed4a - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 27 22:34:37 compute-0 nova_compute[184735]: <domainCapabilities>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <path>/usr/libexec/qemu-kvm</path>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <domain>kvm</domain>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <arch>i686</arch>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <vcpu max='240'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <iothreads supported='yes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <os supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <enum name='firmware'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <loader supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='type'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>rom</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>pflash</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='readonly'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>yes</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>no</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='secure'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>no</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </loader>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   </os>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <cpu>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <mode name='host-passthrough' supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='hostPassthroughMigratable'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>on</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>off</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </mode>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <mode name='maximum' supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='maximumMigratable'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>on</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>off</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </mode>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <mode name='host-model' supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <vendor>AMD</vendor>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='x2apic'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='tsc-deadline'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='hypervisor'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='tsc_adjust'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='spec-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='stibp'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='ssbd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='cmp_legacy'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='overflow-recov'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='succor'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='ibrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='amd-ssbd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='virt-ssbd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='lbrv'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='tsc-scale'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='vmcb-clean'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='flushbyasid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='pause-filter'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='pfthreshold'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='svme-addr-chk'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='disable' name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </mode>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <mode name='custom' supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Broadwell'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Broadwell-IBRS'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Broadwell-noTSX'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Broadwell-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Broadwell-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Broadwell-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Broadwell-v4'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Cascadelake-Server'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Cascadelake-Server-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Cascadelake-Server-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Cascadelake-Server-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Cascadelake-Server-v4'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Cascadelake-Server-v5'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='ClearwaterForest'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni-int16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bhi-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bhi-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cmpccxadd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ddpd-u'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='intel-psfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ipred-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='lam'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mcdt-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pbrsb-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='prefetchiti'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rrsba-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sha512'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sm3'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sm4'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='ClearwaterForest-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni-int16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bhi-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bhi-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cmpccxadd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ddpd-u'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='intel-psfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ipred-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='lam'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mcdt-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pbrsb-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='prefetchiti'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rrsba-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sha512'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sm3'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sm4'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Cooperlake'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Cooperlake-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Cooperlake-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Denverton'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mpx'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Denverton-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mpx'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Denverton-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Denverton-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Dhyana-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Genoa'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amd-psfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='auto-ibrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='stibp-always-on'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Genoa-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amd-psfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='auto-ibrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='stibp-always-on'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Genoa-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amd-psfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='auto-ibrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fs-gs-base-ns'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='perfmon-v2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='stibp-always-on'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Milan'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Milan-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Milan-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amd-psfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='stibp-always-on'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Milan-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amd-psfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='stibp-always-on'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Rome'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Rome-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Rome-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Rome-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Turin'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amd-psfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='auto-ibrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vp2intersect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fs-gs-base-ns'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibpb-brtype'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='perfmon-v2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='prefetchi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbpb'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='srso-user-kernel-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='stibp-always-on'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Turin-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amd-psfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='auto-ibrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vp2intersect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fs-gs-base-ns'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibpb-brtype'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='perfmon-v2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='prefetchi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbpb'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='srso-user-kernel-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='stibp-always-on'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-v4'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-v5'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='GraniteRapids'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-tile'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrc'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fzrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mcdt-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pbrsb-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='prefetchiti'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='GraniteRapids-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-tile'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrc'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fzrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mcdt-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pbrsb-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='prefetchiti'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='GraniteRapids-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-tile'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx10'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx10-128'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx10-256'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx10-512'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrc'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fzrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mcdt-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pbrsb-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='prefetchiti'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='GraniteRapids-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-tile'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx10'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx10-128'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx10-256'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx10-512'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrc'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fzrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mcdt-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pbrsb-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='prefetchiti'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Haswell'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Haswell-IBRS'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Haswell-noTSX'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Haswell-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Haswell-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Haswell-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Haswell-v4'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Icelake-Server'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Icelake-Server-noTSX'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Icelake-Server-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Icelake-Server-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Icelake-Server-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Icelake-Server-v4'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Icelake-Server-v5'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Icelake-Server-v6'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Icelake-Server-v7'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='IvyBridge'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='IvyBridge-IBRS'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='IvyBridge-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='IvyBridge-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='KnightsMill'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-4fmaps'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-4vnniw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512er'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512pf'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='KnightsMill-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-4fmaps'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-4vnniw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512er'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512pf'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Opteron_G4'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fma4'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xop'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Opteron_G4-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fma4'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xop'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Opteron_G5'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fma4'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='tbm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xop'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Opteron_G5-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fma4'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='tbm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xop'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='SapphireRapids'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-tile'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrc'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fzrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='SapphireRapids-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-tile'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrc'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fzrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='SapphireRapids-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-tile'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrc'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fzrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='SapphireRapids-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-tile'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrc'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fzrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='SapphireRapids-v4'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-tile'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 python3.9[185587]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrc'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fzrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='SierraForest'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cmpccxadd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mcdt-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pbrsb-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='SierraForest-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cmpccxadd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mcdt-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pbrsb-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='SierraForest-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bhi-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cmpccxadd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='intel-psfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ipred-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='lam'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mcdt-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pbrsb-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rrsba-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='SierraForest-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bhi-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cmpccxadd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='intel-psfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ipred-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='lam'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mcdt-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pbrsb-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rrsba-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Client'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Client-IBRS'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Client-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Client-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Client-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Client-v4'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Server'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Server-IBRS'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Server-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Server-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Server-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Server-v4'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Server-v5'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Snowridge'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='core-capability'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mpx'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='split-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Snowridge-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='core-capability'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mpx'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='split-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Snowridge-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='core-capability'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='split-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Snowridge-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='core-capability'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='split-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Snowridge-v4'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='athlon'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='3dnow'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='3dnowext'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='athlon-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='3dnow'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='3dnowext'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='core2duo'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='core2duo-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='coreduo'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='coreduo-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='n270'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='n270-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='phenom'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='3dnow'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='3dnowext'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='phenom-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='3dnow'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='3dnowext'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </mode>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   </cpu>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <memoryBacking supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <enum name='sourceType'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <value>file</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <value>anonymous</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <value>memfd</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   </memoryBacking>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <devices>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <disk supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='diskDevice'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>disk</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>cdrom</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>floppy</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>lun</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='bus'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>ide</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>fdc</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>scsi</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>virtio</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>usb</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>sata</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='model'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>virtio</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>virtio-transitional</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>virtio-non-transitional</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </disk>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <graphics supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='type'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>vnc</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>egl-headless</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>dbus</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </graphics>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <video supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='modelType'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>vga</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>cirrus</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>virtio</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>none</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>bochs</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>ramfb</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </video>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <hostdev supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='mode'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>subsystem</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='startupPolicy'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>default</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>mandatory</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>requisite</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>optional</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='subsysType'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>usb</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>pci</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>scsi</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='capsType'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='pciBackend'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </hostdev>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <rng supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='model'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>virtio</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>virtio-transitional</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>virtio-non-transitional</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='backendModel'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>random</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>egd</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>builtin</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </rng>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <filesystem supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='driverType'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>path</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>handle</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>virtiofs</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </filesystem>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <tpm supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='model'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>tpm-tis</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>tpm-crb</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='backendModel'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>emulator</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>external</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='backendVersion'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>2.0</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </tpm>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <redirdev supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='bus'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>usb</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </redirdev>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <channel supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='type'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>pty</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>unix</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </channel>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <crypto supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='model'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='type'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>qemu</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='backendModel'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>builtin</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </crypto>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <interface supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='backendType'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>default</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>passt</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </interface>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <panic supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='model'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>isa</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>hyperv</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </panic>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <console supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='type'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>null</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>vc</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>pty</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>dev</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>file</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>pipe</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>stdio</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>udp</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>tcp</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>unix</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>qemu-vdagent</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>dbus</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </console>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   </devices>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <features>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <gic supported='no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <vmcoreinfo supported='yes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <genid supported='yes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <backingStoreInput supported='yes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <backup supported='yes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <async-teardown supported='yes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <s390-pv supported='no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <ps2 supported='yes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <tdx supported='no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <sev supported='no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <sgx supported='no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <hyperv supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='features'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>relaxed</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>vapic</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>spinlocks</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>vpindex</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>runtime</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>synic</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>stimer</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>reset</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>vendor_id</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>frequencies</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>reenlightenment</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>tlbflush</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>ipi</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>avic</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>emsr_bitmap</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>xmm_input</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <defaults>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <spinlocks>4095</spinlocks>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <stimer_direct>on</stimer_direct>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <tlbflush_direct>on</tlbflush_direct>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <tlbflush_extended>on</tlbflush_extended>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </defaults>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </hyperv>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <launchSecurity supported='no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   </features>
Jan 27 22:34:37 compute-0 nova_compute[184735]: </domainCapabilities>
Jan 27 22:34:37 compute-0 nova_compute[184735]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 27 22:34:37 compute-0 nova_compute[184735]: 2026-01-27 22:34:37.052 184739 DEBUG nova.virt.libvirt.host [None req-8fc64d03-59c5-4199-8cd1-41b4eb81ed4a - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 27 22:34:37 compute-0 nova_compute[184735]: <domainCapabilities>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <path>/usr/libexec/qemu-kvm</path>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <domain>kvm</domain>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <arch>i686</arch>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <vcpu max='4096'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <iothreads supported='yes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <os supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <enum name='firmware'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <loader supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='type'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>rom</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>pflash</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='readonly'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>yes</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>no</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='secure'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>no</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </loader>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   </os>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <cpu>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <mode name='host-passthrough' supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='hostPassthroughMigratable'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>on</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>off</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </mode>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <mode name='maximum' supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='maximumMigratable'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>on</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>off</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </mode>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <mode name='host-model' supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <vendor>AMD</vendor>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='x2apic'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='tsc-deadline'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='hypervisor'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='tsc_adjust'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='spec-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='stibp'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='ssbd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='cmp_legacy'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='overflow-recov'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='succor'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='ibrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='amd-ssbd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='virt-ssbd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='lbrv'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='tsc-scale'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='vmcb-clean'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='flushbyasid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='pause-filter'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='pfthreshold'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='svme-addr-chk'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='disable' name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </mode>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <mode name='custom' supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Broadwell'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Broadwell-IBRS'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Broadwell-noTSX'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Broadwell-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Broadwell-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Broadwell-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Broadwell-v4'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Cascadelake-Server'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Cascadelake-Server-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Cascadelake-Server-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Cascadelake-Server-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Cascadelake-Server-v4'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 systemd[1]: Stopping nova_compute container...
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Cascadelake-Server-v5'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='ClearwaterForest'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni-int16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bhi-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bhi-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cmpccxadd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ddpd-u'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='intel-psfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ipred-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='lam'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mcdt-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pbrsb-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='prefetchiti'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rrsba-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sha512'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sm3'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sm4'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='ClearwaterForest-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni-int16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bhi-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bhi-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cmpccxadd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ddpd-u'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='intel-psfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ipred-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='lam'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mcdt-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pbrsb-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='prefetchiti'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rrsba-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sha512'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sm3'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sm4'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Cooperlake'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Cooperlake-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Cooperlake-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Denverton'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mpx'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Denverton-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mpx'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Denverton-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Denverton-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Dhyana-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Genoa'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amd-psfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='auto-ibrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='stibp-always-on'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Genoa-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amd-psfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='auto-ibrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='stibp-always-on'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Genoa-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amd-psfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='auto-ibrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fs-gs-base-ns'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='perfmon-v2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='stibp-always-on'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Milan'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Milan-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Milan-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amd-psfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='stibp-always-on'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Milan-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amd-psfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='stibp-always-on'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Rome'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Rome-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Rome-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Rome-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Turin'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amd-psfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='auto-ibrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vp2intersect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fs-gs-base-ns'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibpb-brtype'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='perfmon-v2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='prefetchi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbpb'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='srso-user-kernel-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='stibp-always-on'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Turin-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amd-psfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='auto-ibrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vp2intersect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fs-gs-base-ns'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibpb-brtype'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='perfmon-v2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='prefetchi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbpb'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='srso-user-kernel-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='stibp-always-on'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-v4'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-v5'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='GraniteRapids'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-tile'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrc'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fzrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mcdt-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pbrsb-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='prefetchiti'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='GraniteRapids-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-tile'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrc'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fzrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mcdt-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pbrsb-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='prefetchiti'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='GraniteRapids-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-tile'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx10'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx10-128'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx10-256'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx10-512'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrc'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fzrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mcdt-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pbrsb-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='prefetchiti'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='GraniteRapids-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-tile'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx10'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx10-128'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx10-256'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx10-512'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrc'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fzrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mcdt-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pbrsb-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='prefetchiti'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Haswell'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Haswell-IBRS'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Haswell-noTSX'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Haswell-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Haswell-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Haswell-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Haswell-v4'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Icelake-Server'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Icelake-Server-noTSX'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Icelake-Server-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Icelake-Server-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Icelake-Server-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Icelake-Server-v4'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Icelake-Server-v5'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Icelake-Server-v6'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Icelake-Server-v7'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='IvyBridge'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='IvyBridge-IBRS'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='IvyBridge-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='IvyBridge-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='KnightsMill'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-4fmaps'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-4vnniw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512er'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512pf'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='KnightsMill-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-4fmaps'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-4vnniw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512er'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512pf'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Opteron_G4'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fma4'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xop'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Opteron_G4-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fma4'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xop'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Opteron_G5'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fma4'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='tbm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xop'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Opteron_G5-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fma4'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='tbm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xop'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='SapphireRapids'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-tile'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrc'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fzrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='SapphireRapids-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-tile'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrc'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fzrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='SapphireRapids-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-tile'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrc'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fzrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='SapphireRapids-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-tile'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrc'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fzrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='SapphireRapids-v4'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-tile'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrc'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fzrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='SierraForest'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cmpccxadd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mcdt-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pbrsb-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='SierraForest-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cmpccxadd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mcdt-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pbrsb-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='SierraForest-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bhi-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cmpccxadd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='intel-psfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ipred-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='lam'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mcdt-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pbrsb-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rrsba-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='SierraForest-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bhi-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cmpccxadd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='intel-psfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ipred-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='lam'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mcdt-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pbrsb-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rrsba-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Client'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Client-IBRS'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Client-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Client-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Client-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Client-v4'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Server'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Server-IBRS'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Server-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Server-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Server-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Server-v4'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Server-v5'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Snowridge'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='core-capability'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mpx'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='split-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Snowridge-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='core-capability'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mpx'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='split-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Snowridge-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='core-capability'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='split-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Snowridge-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='core-capability'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='split-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Snowridge-v4'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='athlon'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='3dnow'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='3dnowext'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='athlon-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='3dnow'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='3dnowext'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='core2duo'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='core2duo-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='coreduo'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='coreduo-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='n270'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='n270-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='phenom'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='3dnow'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='3dnowext'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='phenom-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='3dnow'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='3dnowext'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </mode>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   </cpu>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <memoryBacking supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <enum name='sourceType'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <value>file</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <value>anonymous</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <value>memfd</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   </memoryBacking>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <devices>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <disk supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='diskDevice'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>disk</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>cdrom</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>floppy</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>lun</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='bus'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>fdc</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>scsi</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>virtio</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>usb</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>sata</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='model'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>virtio</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>virtio-transitional</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>virtio-non-transitional</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </disk>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <graphics supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='type'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>vnc</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>egl-headless</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>dbus</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </graphics>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <video supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='modelType'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>vga</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>cirrus</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>virtio</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>none</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>bochs</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>ramfb</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </video>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <hostdev supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='mode'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>subsystem</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='startupPolicy'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>default</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>mandatory</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>requisite</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>optional</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='subsysType'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>usb</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>pci</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>scsi</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='capsType'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='pciBackend'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </hostdev>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <rng supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='model'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>virtio</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>virtio-transitional</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>virtio-non-transitional</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='backendModel'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>random</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>egd</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>builtin</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </rng>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <filesystem supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='driverType'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>path</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>handle</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>virtiofs</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </filesystem>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <tpm supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='model'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>tpm-tis</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>tpm-crb</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='backendModel'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>emulator</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>external</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='backendVersion'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>2.0</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </tpm>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <redirdev supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='bus'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>usb</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </redirdev>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <channel supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='type'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>pty</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>unix</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </channel>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <crypto supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='model'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='type'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>qemu</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='backendModel'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>builtin</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </crypto>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <interface supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='backendType'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>default</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>passt</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </interface>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <panic supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='model'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>isa</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>hyperv</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </panic>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <console supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='type'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>null</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>vc</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>pty</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>dev</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>file</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>pipe</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>stdio</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>udp</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>tcp</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>unix</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>qemu-vdagent</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>dbus</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </console>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   </devices>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <features>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <gic supported='no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <vmcoreinfo supported='yes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <genid supported='yes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <backingStoreInput supported='yes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <backup supported='yes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <async-teardown supported='yes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <s390-pv supported='no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <ps2 supported='yes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <tdx supported='no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <sev supported='no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <sgx supported='no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <hyperv supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='features'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>relaxed</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>vapic</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>spinlocks</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>vpindex</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>runtime</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>synic</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>stimer</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>reset</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>vendor_id</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>frequencies</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>reenlightenment</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>tlbflush</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>ipi</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>avic</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>emsr_bitmap</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>xmm_input</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <defaults>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <spinlocks>4095</spinlocks>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <stimer_direct>on</stimer_direct>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <tlbflush_direct>on</tlbflush_direct>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <tlbflush_extended>on</tlbflush_extended>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </defaults>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </hyperv>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <launchSecurity supported='no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   </features>
Jan 27 22:34:37 compute-0 nova_compute[184735]: </domainCapabilities>
Jan 27 22:34:37 compute-0 nova_compute[184735]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 27 22:34:37 compute-0 nova_compute[184735]: 2026-01-27 22:34:37.133 184739 DEBUG nova.virt.libvirt.host [None req-8fc64d03-59c5-4199-8cd1-41b4eb81ed4a - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 27 22:34:37 compute-0 nova_compute[184735]: 2026-01-27 22:34:37.137 184739 DEBUG nova.virt.libvirt.host [None req-8fc64d03-59c5-4199-8cd1-41b4eb81ed4a - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 27 22:34:37 compute-0 nova_compute[184735]: <domainCapabilities>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <path>/usr/libexec/qemu-kvm</path>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <domain>kvm</domain>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <arch>x86_64</arch>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <vcpu max='240'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <iothreads supported='yes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <os supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <enum name='firmware'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <loader supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='type'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>rom</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>pflash</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='readonly'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>yes</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>no</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='secure'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>no</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </loader>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   </os>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <cpu>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <mode name='host-passthrough' supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='hostPassthroughMigratable'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>on</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>off</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </mode>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <mode name='maximum' supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='maximumMigratable'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>on</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>off</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </mode>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <mode name='host-model' supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <vendor>AMD</vendor>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='x2apic'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='tsc-deadline'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='hypervisor'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='tsc_adjust'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='spec-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='stibp'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='ssbd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='cmp_legacy'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='overflow-recov'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='succor'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='ibrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='amd-ssbd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='virt-ssbd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='lbrv'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='tsc-scale'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='vmcb-clean'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='flushbyasid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='pause-filter'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='pfthreshold'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='svme-addr-chk'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <feature policy='disable' name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </mode>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <mode name='custom' supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Broadwell'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Broadwell-IBRS'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Broadwell-noTSX'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Broadwell-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Broadwell-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Broadwell-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Broadwell-v4'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Cascadelake-Server'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Cascadelake-Server-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Cascadelake-Server-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Cascadelake-Server-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Cascadelake-Server-v4'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Cascadelake-Server-v5'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='ClearwaterForest'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni-int16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bhi-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bhi-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cmpccxadd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ddpd-u'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='intel-psfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ipred-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='lam'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mcdt-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pbrsb-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='prefetchiti'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rrsba-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sha512'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sm3'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sm4'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='ClearwaterForest-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni-int16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bhi-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bhi-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cmpccxadd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ddpd-u'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='intel-psfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ipred-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='lam'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mcdt-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pbrsb-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='prefetchiti'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rrsba-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sha512'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sm3'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sm4'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Cooperlake'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Cooperlake-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Cooperlake-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Denverton'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mpx'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Denverton-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mpx'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Denverton-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Denverton-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Dhyana-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Genoa'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amd-psfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='auto-ibrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='stibp-always-on'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Genoa-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amd-psfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='auto-ibrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='stibp-always-on'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Genoa-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amd-psfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='auto-ibrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fs-gs-base-ns'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='perfmon-v2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='stibp-always-on'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Milan'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Milan-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Milan-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amd-psfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='stibp-always-on'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Milan-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amd-psfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='stibp-always-on'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Rome'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Rome-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Rome-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Rome-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Turin'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amd-psfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='auto-ibrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vp2intersect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fs-gs-base-ns'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibpb-brtype'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='perfmon-v2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='prefetchi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbpb'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='srso-user-kernel-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='stibp-always-on'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-Turin-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amd-psfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='auto-ibrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vp2intersect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fs-gs-base-ns'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibpb-brtype'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='perfmon-v2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='prefetchi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbpb'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='srso-user-kernel-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='stibp-always-on'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-v4'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='EPYC-v5'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='GraniteRapids'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-tile'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrc'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fzrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mcdt-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pbrsb-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='prefetchiti'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='GraniteRapids-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-tile'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrc'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fzrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mcdt-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pbrsb-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='prefetchiti'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='GraniteRapids-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-tile'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx10'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx10-128'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx10-256'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx10-512'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrc'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fzrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mcdt-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pbrsb-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='prefetchiti'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='GraniteRapids-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-tile'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx10'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx10-128'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx10-256'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx10-512'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrc'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fzrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mcdt-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pbrsb-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='prefetchiti'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Haswell'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Haswell-IBRS'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Haswell-noTSX'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Haswell-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Haswell-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Haswell-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Haswell-v4'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Icelake-Server'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Icelake-Server-noTSX'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Icelake-Server-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Icelake-Server-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Icelake-Server-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Icelake-Server-v4'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Icelake-Server-v5'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Icelake-Server-v6'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Icelake-Server-v7'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='IvyBridge'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='IvyBridge-IBRS'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='IvyBridge-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='IvyBridge-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='KnightsMill'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-4fmaps'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-4vnniw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512er'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512pf'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='KnightsMill-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-4fmaps'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-4vnniw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512er'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512pf'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Opteron_G4'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fma4'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xop'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Opteron_G4-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fma4'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xop'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Opteron_G5'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fma4'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='tbm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xop'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Opteron_G5-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fma4'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='tbm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xop'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='SapphireRapids'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-tile'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrc'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fzrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='SapphireRapids-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-tile'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrc'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fzrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='SapphireRapids-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-tile'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrc'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fzrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='SapphireRapids-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-tile'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrc'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fzrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='SapphireRapids-v4'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='amx-tile'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-bf16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-fp16'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bitalg'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrc'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fzrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='la57'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='taa-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='SierraForest'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cmpccxadd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mcdt-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pbrsb-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='SierraForest-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cmpccxadd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mcdt-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pbrsb-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='SierraForest-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bhi-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cmpccxadd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='intel-psfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ipred-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='lam'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mcdt-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pbrsb-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rrsba-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='SierraForest-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ifma'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bhi-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cmpccxadd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fbsdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='fsrs'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ibrs-all'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='intel-psfd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ipred-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='lam'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mcdt-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pbrsb-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='psdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rrsba-ctrl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='serialize'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vaes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Client'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Client-IBRS'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Client-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Client-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Client-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Client-v4'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Server'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Server-IBRS'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Server-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Server-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='hle'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='rtm'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Server-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Server-v4'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Skylake-Server-v5'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512bw'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512cd'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512dq'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512f'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='avx512vl'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='invpcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pcid'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='pku'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Snowridge'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='core-capability'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mpx'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='split-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Snowridge-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='core-capability'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='mpx'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='split-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Snowridge-v2'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='core-capability'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='split-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Snowridge-v3'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='core-capability'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='split-lock-detect'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='Snowridge-v4'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='cldemote'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='erms'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='gfni'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdir64b'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='movdiri'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='xsaves'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='athlon'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='3dnow'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='3dnowext'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='athlon-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='3dnow'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='3dnowext'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='core2duo'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='core2duo-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='coreduo'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='coreduo-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='n270'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='n270-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='ss'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='phenom'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='3dnow'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='3dnowext'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <blockers model='phenom-v1'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='3dnow'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <feature name='3dnowext'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </blockers>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </mode>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   </cpu>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <memoryBacking supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <enum name='sourceType'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <value>file</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <value>anonymous</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <value>memfd</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   </memoryBacking>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <devices>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <disk supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='diskDevice'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>disk</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>cdrom</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>floppy</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>lun</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='bus'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>ide</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>fdc</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>scsi</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>virtio</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>usb</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>sata</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='model'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>virtio</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>virtio-transitional</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>virtio-non-transitional</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </disk>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <graphics supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='type'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>vnc</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>egl-headless</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>dbus</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </graphics>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <video supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='modelType'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>vga</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>cirrus</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>virtio</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>none</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>bochs</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>ramfb</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </video>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <hostdev supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='mode'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>subsystem</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='startupPolicy'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>default</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>mandatory</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>requisite</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>optional</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='subsysType'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>usb</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>pci</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>scsi</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='capsType'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='pciBackend'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </hostdev>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <rng supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='model'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>virtio</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>virtio-transitional</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>virtio-non-transitional</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='backendModel'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>random</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>egd</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>builtin</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </rng>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <filesystem supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='driverType'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>path</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>handle</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>virtiofs</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </filesystem>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <tpm supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='model'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>tpm-tis</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>tpm-crb</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='backendModel'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>emulator</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>external</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='backendVersion'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>2.0</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </tpm>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <redirdev supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='bus'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>usb</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </redirdev>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <channel supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='type'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>pty</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>unix</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </channel>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <crypto supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='model'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='type'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>qemu</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='backendModel'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>builtin</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </crypto>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <interface supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='backendType'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>default</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>passt</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </interface>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <panic supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='model'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>isa</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>hyperv</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </panic>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <console supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='type'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>null</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>vc</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>pty</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>dev</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>file</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>pipe</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>stdio</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>udp</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>tcp</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>unix</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>qemu-vdagent</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>dbus</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </console>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   </devices>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   <features>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <gic supported='no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <vmcoreinfo supported='yes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <genid supported='yes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <backingStoreInput supported='yes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <backup supported='yes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <async-teardown supported='yes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <s390-pv supported='no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <ps2 supported='yes'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <tdx supported='no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <sev supported='no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <sgx supported='no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <hyperv supported='yes'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <enum name='features'>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>relaxed</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>vapic</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>spinlocks</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>vpindex</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>runtime</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>synic</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>stimer</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>reset</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>vendor_id</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>frequencies</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>reenlightenment</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>tlbflush</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>ipi</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>avic</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>emsr_bitmap</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <value>xmm_input</value>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </enum>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       <defaults>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <spinlocks>4095</spinlocks>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <stimer_direct>on</stimer_direct>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <tlbflush_direct>on</tlbflush_direct>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <tlbflush_extended>on</tlbflush_extended>
Jan 27 22:34:37 compute-0 nova_compute[184735]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 27 22:34:37 compute-0 nova_compute[184735]:       </defaults>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     </hyperv>
Jan 27 22:34:37 compute-0 nova_compute[184735]:     <launchSecurity supported='no'/>
Jan 27 22:34:37 compute-0 nova_compute[184735]:   </features>
Jan 27 22:34:37 compute-0 nova_compute[184735]: </domainCapabilities>
Jan 27 22:34:37 compute-0 nova_compute[184735]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 27 22:34:37 compute-0 nova_compute[184735]: 2026-01-27 22:34:37.241 184739 DEBUG oslo_concurrency.lockutils [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 22:34:37 compute-0 nova_compute[184735]: 2026-01-27 22:34:37.241 184739 DEBUG oslo_concurrency.lockutils [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 22:34:37 compute-0 nova_compute[184735]: 2026-01-27 22:34:37.242 184739 DEBUG oslo_concurrency.lockutils [None req-cb4096d7-1947-400a-8981-830d1684be39 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 22:34:37 compute-0 virtqemud[185375]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 27 22:34:37 compute-0 virtqemud[185375]: hostname: compute-0
Jan 27 22:34:37 compute-0 virtqemud[185375]: End of file while reading data: Input/output error
Jan 27 22:34:37 compute-0 systemd[1]: libpod-da103180c47380ff29c2682126100d6e6038082b4b04cb558542dfdf6b659154.scope: Deactivated successfully.
Jan 27 22:34:37 compute-0 systemd[1]: libpod-da103180c47380ff29c2682126100d6e6038082b4b04cb558542dfdf6b659154.scope: Consumed 2.896s CPU time.
Jan 27 22:34:37 compute-0 podman[185595]: 2026-01-27 22:34:37.589928202 +0000 UTC m=+0.425329053 container died da103180c47380ff29c2682126100d6e6038082b4b04cb558542dfdf6b659154 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 22:34:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-da103180c47380ff29c2682126100d6e6038082b4b04cb558542dfdf6b659154-userdata-shm.mount: Deactivated successfully.
Jan 27 22:34:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-60631a951d81501126afac98b20add501780c15b5f3709bfec3aa7901b3d86af-merged.mount: Deactivated successfully.
Jan 27 22:34:37 compute-0 podman[185595]: 2026-01-27 22:34:37.634324791 +0000 UTC m=+0.469725622 container cleanup da103180c47380ff29c2682126100d6e6038082b4b04cb558542dfdf6b659154 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 22:34:37 compute-0 podman[185595]: nova_compute
Jan 27 22:34:37 compute-0 podman[185622]: nova_compute
Jan 27 22:34:37 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 27 22:34:37 compute-0 systemd[1]: Stopped nova_compute container.
Jan 27 22:34:37 compute-0 systemd[1]: Starting nova_compute container...
Jan 27 22:34:37 compute-0 systemd[1]: Started libcrun container.
Jan 27 22:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60631a951d81501126afac98b20add501780c15b5f3709bfec3aa7901b3d86af/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 27 22:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60631a951d81501126afac98b20add501780c15b5f3709bfec3aa7901b3d86af/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 27 22:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60631a951d81501126afac98b20add501780c15b5f3709bfec3aa7901b3d86af/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 27 22:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60631a951d81501126afac98b20add501780c15b5f3709bfec3aa7901b3d86af/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 27 22:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60631a951d81501126afac98b20add501780c15b5f3709bfec3aa7901b3d86af/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 27 22:34:37 compute-0 podman[185635]: 2026-01-27 22:34:37.79329896 +0000 UTC m=+0.069829719 container init da103180c47380ff29c2682126100d6e6038082b4b04cb558542dfdf6b659154 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 22:34:37 compute-0 podman[185635]: 2026-01-27 22:34:37.798161944 +0000 UTC m=+0.074692673 container start da103180c47380ff29c2682126100d6e6038082b4b04cb558542dfdf6b659154 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 27 22:34:37 compute-0 podman[185635]: nova_compute
Jan 27 22:34:37 compute-0 nova_compute[185650]: + sudo -E kolla_set_configs
Jan 27 22:34:37 compute-0 systemd[1]: Started nova_compute container.
Jan 27 22:34:37 compute-0 sudo[185585]: pam_unix(sudo:session): session closed for user root
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Validating config file
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Copying service configuration files
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Deleting /etc/ceph
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Creating directory /etc/ceph
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Setting permission for /etc/ceph
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Writing out command to execute
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 27 22:34:37 compute-0 nova_compute[185650]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 27 22:34:37 compute-0 nova_compute[185650]: ++ cat /run_command
Jan 27 22:34:37 compute-0 nova_compute[185650]: + CMD=nova-compute
Jan 27 22:34:37 compute-0 nova_compute[185650]: + ARGS=
Jan 27 22:34:37 compute-0 nova_compute[185650]: + sudo kolla_copy_cacerts
Jan 27 22:34:37 compute-0 nova_compute[185650]: + [[ ! -n '' ]]
Jan 27 22:34:37 compute-0 nova_compute[185650]: + . kolla_extend_start
Jan 27 22:34:37 compute-0 nova_compute[185650]: Running command: 'nova-compute'
Jan 27 22:34:37 compute-0 nova_compute[185650]: + echo 'Running command: '\''nova-compute'\'''
Jan 27 22:34:37 compute-0 nova_compute[185650]: + umask 0022
Jan 27 22:34:37 compute-0 nova_compute[185650]: + exec nova-compute
Jan 27 22:34:38 compute-0 sudo[185827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bexhytqjrzbryvtuztvmnryqpdtcpdwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553278.0374434-1287-189143033981713/AnsiballZ_podman_container.py'
Jan 27 22:34:38 compute-0 podman[185785]: 2026-01-27 22:34:38.343981505 +0000 UTC m=+0.058019169 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 27 22:34:38 compute-0 sudo[185827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:34:38 compute-0 python3.9[185833]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 27 22:34:38 compute-0 systemd[1]: Started libpod-conmon-fa6e890ad5d53a78eca9b3facb1f6726bccdc9766984baf66345503133ef5412.scope.
Jan 27 22:34:38 compute-0 systemd[1]: Started libcrun container.
Jan 27 22:34:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3c33db53c5c24a8012618aab8414284f129539a777891c728e0a9a327a6f953/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 27 22:34:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3c33db53c5c24a8012618aab8414284f129539a777891c728e0a9a327a6f953/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 27 22:34:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3c33db53c5c24a8012618aab8414284f129539a777891c728e0a9a327a6f953/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 27 22:34:38 compute-0 podman[185857]: 2026-01-27 22:34:38.84288042 +0000 UTC m=+0.139131394 container init fa6e890ad5d53a78eca9b3facb1f6726bccdc9766984baf66345503133ef5412 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 22:34:38 compute-0 podman[185857]: 2026-01-27 22:34:38.8519121 +0000 UTC m=+0.148163064 container start fa6e890ad5d53a78eca9b3facb1f6726bccdc9766984baf66345503133ef5412 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm)
Jan 27 22:34:38 compute-0 python3.9[185833]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 27 22:34:38 compute-0 nova_compute_init[185879]: INFO:nova_statedir:Applying nova statedir ownership
Jan 27 22:34:38 compute-0 nova_compute_init[185879]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 27 22:34:38 compute-0 nova_compute_init[185879]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 27 22:34:38 compute-0 nova_compute_init[185879]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 27 22:34:38 compute-0 nova_compute_init[185879]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 27 22:34:38 compute-0 nova_compute_init[185879]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 27 22:34:38 compute-0 nova_compute_init[185879]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 27 22:34:38 compute-0 nova_compute_init[185879]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 27 22:34:38 compute-0 nova_compute_init[185879]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 27 22:34:38 compute-0 nova_compute_init[185879]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 27 22:34:38 compute-0 nova_compute_init[185879]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 27 22:34:38 compute-0 nova_compute_init[185879]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 27 22:34:38 compute-0 nova_compute_init[185879]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 27 22:34:38 compute-0 nova_compute_init[185879]: INFO:nova_statedir:Nova statedir ownership complete
Jan 27 22:34:38 compute-0 systemd[1]: libpod-fa6e890ad5d53a78eca9b3facb1f6726bccdc9766984baf66345503133ef5412.scope: Deactivated successfully.
Jan 27 22:34:38 compute-0 podman[185880]: 2026-01-27 22:34:38.926424838 +0000 UTC m=+0.040125604 container died fa6e890ad5d53a78eca9b3facb1f6726bccdc9766984baf66345503133ef5412 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute_init, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 22:34:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fa6e890ad5d53a78eca9b3facb1f6726bccdc9766984baf66345503133ef5412-userdata-shm.mount: Deactivated successfully.
Jan 27 22:34:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3c33db53c5c24a8012618aab8414284f129539a777891c728e0a9a327a6f953-merged.mount: Deactivated successfully.
Jan 27 22:34:38 compute-0 podman[185890]: 2026-01-27 22:34:38.969400492 +0000 UTC m=+0.052209991 container cleanup fa6e890ad5d53a78eca9b3facb1f6726bccdc9766984baf66345503133ef5412 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Jan 27 22:34:38 compute-0 systemd[1]: libpod-conmon-fa6e890ad5d53a78eca9b3facb1f6726bccdc9766984baf66345503133ef5412.scope: Deactivated successfully.
Jan 27 22:34:39 compute-0 sudo[185827]: pam_unix(sudo:session): session closed for user root
Jan 27 22:34:39 compute-0 sshd-session[162629]: Connection closed by 192.168.122.30 port 54354
Jan 27 22:34:39 compute-0 sshd-session[162626]: pam_unix(sshd:session): session closed for user zuul
Jan 27 22:34:39 compute-0 systemd-logind[789]: Session 23 logged out. Waiting for processes to exit.
Jan 27 22:34:39 compute-0 systemd[1]: session-23.scope: Deactivated successfully.
Jan 27 22:34:39 compute-0 systemd[1]: session-23.scope: Consumed 1min 35.441s CPU time.
Jan 27 22:34:39 compute-0 systemd-logind[789]: Removed session 23.
Jan 27 22:34:39 compute-0 nova_compute[185650]: 2026-01-27 22:34:39.833 185654 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 27 22:34:39 compute-0 nova_compute[185650]: 2026-01-27 22:34:39.833 185654 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 27 22:34:39 compute-0 nova_compute[185650]: 2026-01-27 22:34:39.833 185654 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 27 22:34:39 compute-0 nova_compute[185650]: 2026-01-27 22:34:39.834 185654 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 27 22:34:39 compute-0 nova_compute[185650]: 2026-01-27 22:34:39.963 185654 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:34:39 compute-0 nova_compute[185650]: 2026-01-27 22:34:39.983 185654 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:34:39 compute-0 nova_compute[185650]: 2026-01-27 22:34:39.983 185654 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.446 185654 INFO nova.virt.driver [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.579 185654 INFO nova.compute.provider_config [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.594 185654 DEBUG oslo_concurrency.lockutils [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.594 185654 DEBUG oslo_concurrency.lockutils [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.595 185654 DEBUG oslo_concurrency.lockutils [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.595 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.595 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.595 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.595 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.596 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.596 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.596 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.596 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.596 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.596 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.596 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.597 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.597 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.597 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.597 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.597 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.597 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.598 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.598 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.598 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.598 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.598 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.598 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.598 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.599 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.599 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.599 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.599 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.599 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.599 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.600 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.600 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.600 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.600 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.600 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.600 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.600 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.601 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.601 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.601 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.601 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.601 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.601 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.602 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.602 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.602 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.602 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.602 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.602 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.602 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.603 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.603 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.603 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.603 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.603 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.603 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.603 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.604 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.604 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.604 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.604 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.604 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.604 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.604 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.605 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.605 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.605 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.605 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.605 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.605 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.605 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.606 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.606 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.606 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.606 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.606 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.606 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.606 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.607 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.607 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.607 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.607 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.607 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.607 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.608 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.608 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.608 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.608 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.608 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.608 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.608 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.609 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.609 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.609 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.609 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.609 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.609 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.609 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.610 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.610 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.610 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.610 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.610 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.610 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.610 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.611 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.611 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.611 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.611 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.611 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.611 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.611 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.612 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.612 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.612 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.612 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.612 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.612 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.612 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.612 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.613 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.613 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.613 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.613 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.613 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.613 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.613 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.614 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.614 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.614 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.614 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.614 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.614 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.614 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.615 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.615 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.615 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.615 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.615 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.615 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.615 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.616 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.616 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.616 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.616 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.616 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.616 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.616 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.617 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.617 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.617 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.617 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.617 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.617 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.617 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.618 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.618 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.618 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.618 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.618 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.618 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.619 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.619 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.619 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.619 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.619 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.619 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.619 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.620 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.620 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.620 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.620 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.620 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.620 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.620 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.621 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.621 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.621 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.621 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.621 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.621 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.622 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.622 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.622 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.622 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.622 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.622 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.622 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.623 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.623 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.623 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.623 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.623 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.623 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.623 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.624 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.624 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.624 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.624 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.624 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.624 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.624 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.625 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.625 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.625 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.625 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.625 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.625 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.626 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.626 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.626 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.626 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.626 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.626 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.627 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.627 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.627 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.627 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.627 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.627 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.627 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.628 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.628 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.628 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.628 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.628 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.628 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.628 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.629 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.629 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.629 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.629 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.629 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.629 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.629 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.629 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.630 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.630 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.630 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.630 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.630 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.630 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.631 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.631 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.631 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.631 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.631 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.631 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.631 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.632 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.632 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.632 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.632 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.632 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.632 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.633 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.633 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.633 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.633 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.633 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.633 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.634 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.634 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.634 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.634 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.634 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.634 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.634 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.635 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.635 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.635 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.635 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.635 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.635 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.635 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.636 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.636 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.636 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.636 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.636 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.636 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.637 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.637 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.637 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.637 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.637 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.637 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.637 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.638 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.638 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.638 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.638 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.638 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.638 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.638 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.639 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.639 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.639 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.639 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.639 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.639 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.639 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.640 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.640 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.640 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.640 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.640 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.640 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.641 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.641 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.641 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.641 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.641 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.641 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.641 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.642 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.642 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.642 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.642 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.642 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.642 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.642 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.643 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.643 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.643 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.643 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.643 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.643 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.644 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.644 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.644 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.644 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.644 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.644 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.644 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.645 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.645 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.645 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.645 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.645 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.645 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.645 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.646 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.646 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.646 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.646 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.646 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.646 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.646 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.647 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.647 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.647 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.647 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.647 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.647 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.648 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.648 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.648 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.648 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.648 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.648 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.649 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.649 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.649 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.649 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.649 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.649 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.649 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.650 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.650 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.650 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.650 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.650 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.650 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.650 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.651 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.651 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.651 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.651 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.651 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.651 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.651 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.652 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.652 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.652 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.652 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.652 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.652 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.652 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.653 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.653 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.653 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.653 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.653 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.653 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.653 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.654 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.654 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.654 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.654 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.654 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.654 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.655 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.655 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.655 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.655 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.655 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.656 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.656 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.656 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.656 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.656 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.657 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.657 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.657 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.657 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.657 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.658 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.658 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.658 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.658 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.659 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.659 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.659 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.659 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.659 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.660 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.660 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.660 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.660 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.661 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.661 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.661 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.661 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.661 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.662 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.662 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.662 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.662 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.662 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.663 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.663 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.663 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.663 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.663 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.664 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.664 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.664 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.664 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.665 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.665 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.665 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.665 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.665 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.666 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.666 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.666 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.666 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.667 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.667 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.667 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.667 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.667 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.668 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.668 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.668 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.668 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.668 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.669 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.669 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.669 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.669 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.670 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.670 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.670 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.670 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.671 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.671 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.671 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.671 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.671 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.672 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.672 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.672 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.672 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.673 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.673 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.673 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.673 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.673 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.674 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.674 185654 WARNING oslo_config.cfg [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 27 22:34:40 compute-0 nova_compute[185650]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 27 22:34:40 compute-0 nova_compute[185650]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 27 22:34:40 compute-0 nova_compute[185650]: and ``live_migration_inbound_addr`` respectively.
Jan 27 22:34:40 compute-0 nova_compute[185650]: ).  Its value may be silently ignored in the future.
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.674 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.675 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.675 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.676 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.676 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.676 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.676 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.677 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.677 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.677 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.677 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.677 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.678 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.678 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.678 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.678 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.679 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.679 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.679 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.679 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.679 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.679 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.679 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.680 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.680 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.680 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.680 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.680 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.680 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.681 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.681 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.681 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.681 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.681 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.681 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.682 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.682 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.682 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.682 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.682 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.682 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.683 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.683 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.683 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.683 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.683 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.683 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.683 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.684 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.684 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.684 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.684 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.684 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.684 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.684 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.685 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.685 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.685 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.685 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.685 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.685 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.685 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.686 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.686 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.686 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.686 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.686 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.686 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.687 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.687 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.687 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.687 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.687 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.687 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.687 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.687 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.688 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.688 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.688 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.688 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.688 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.688 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.689 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.689 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.689 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.689 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.689 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.689 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.689 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.690 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.690 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.690 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.690 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.690 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.690 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.691 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.691 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.691 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.691 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.691 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.691 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.691 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.692 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.692 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.692 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.692 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.692 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.692 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.692 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.692 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.693 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.693 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.693 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.693 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.693 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.693 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.693 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.694 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.694 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.694 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.694 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.694 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.694 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.695 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.695 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.695 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.695 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.695 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.695 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.695 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.696 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.696 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.696 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.696 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.696 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.696 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.696 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.697 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.697 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.697 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.697 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.697 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.697 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.698 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.698 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.698 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.698 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.698 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.698 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.698 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.699 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.699 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.699 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.699 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.699 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.699 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.699 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.700 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.700 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.700 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.700 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.700 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.700 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.700 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.701 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.701 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.701 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.701 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.701 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.701 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.701 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.702 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.702 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.702 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.702 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.702 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.702 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.702 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.703 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.703 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.703 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.703 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.703 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.703 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.703 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.704 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.704 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.704 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.704 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.704 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.704 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.704 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.705 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.705 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.705 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.705 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.705 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.705 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.706 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.706 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.706 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.706 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.706 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.706 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.706 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.707 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.707 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.707 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.707 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.707 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.707 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.707 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.707 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.708 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.708 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.708 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.708 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.708 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.708 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.708 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.709 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.709 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.709 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.709 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.709 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.709 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.709 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.710 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.710 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.710 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.710 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.710 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.710 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.710 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.711 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.711 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.711 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.711 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.711 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.711 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.711 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.712 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.712 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.712 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.712 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.712 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.712 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.712 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.713 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.713 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.713 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.713 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.713 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.713 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.714 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.714 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.714 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.714 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.714 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.714 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.714 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.715 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.715 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.715 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.715 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.715 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.715 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.715 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.716 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.716 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.716 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.716 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.716 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.716 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.716 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.717 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.717 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.717 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.717 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.717 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.717 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.717 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.718 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.718 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.718 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.718 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.718 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.718 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.718 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.719 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.719 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.719 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.719 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.719 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.719 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.720 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.720 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.720 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.720 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.720 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.720 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.720 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.721 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.721 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.721 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.721 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.721 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.722 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.722 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.722 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.722 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.722 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.722 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.722 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.723 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.723 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.723 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.723 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.723 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.724 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.724 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.724 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.724 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.724 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.724 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.725 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.725 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.725 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.725 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.725 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.725 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.726 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.726 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.726 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.726 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.726 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.726 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.727 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.727 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.727 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.727 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.727 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.727 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.728 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.728 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.728 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.728 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.728 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.728 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.729 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.729 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.729 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.729 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.729 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.729 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.730 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.730 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.730 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.730 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.730 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.730 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.731 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.731 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.731 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.731 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.731 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.731 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.732 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.732 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.732 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.732 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.732 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.733 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.733 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.733 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.733 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.733 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.733 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.734 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.734 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.734 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.734 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.734 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.734 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.735 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.735 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.735 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.735 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.735 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.735 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.736 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.736 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.736 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.736 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.736 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.736 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.737 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.737 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.737 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.737 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.737 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.737 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.738 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.738 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.738 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.738 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.738 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.739 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.739 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.739 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.739 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.739 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.740 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.740 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.740 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.740 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.740 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.741 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.741 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.741 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.741 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.741 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.741 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.741 185654 DEBUG oslo_service.service [None req-eafd9e6e-67dd-44a8-93c0-e47fba4e4b84 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.743 185654 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.757 185654 DEBUG nova.virt.libvirt.host [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.758 185654 DEBUG nova.virt.libvirt.host [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.758 185654 DEBUG nova.virt.libvirt.host [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.759 185654 DEBUG nova.virt.libvirt.host [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.771 185654 DEBUG nova.virt.libvirt.host [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f69907be130> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.779 185654 DEBUG nova.virt.libvirt.host [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f69907be130> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.780 185654 INFO nova.virt.libvirt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Connection event '1' reason 'None'
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.786 185654 INFO nova.virt.libvirt.host [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Libvirt host capabilities <capabilities>
Jan 27 22:34:40 compute-0 nova_compute[185650]: 
Jan 27 22:34:40 compute-0 nova_compute[185650]:   <host>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <uuid>3d6f6630-1343-4c09-b459-1f5514c0a933</uuid>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <cpu>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <arch>x86_64</arch>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model>EPYC-Rome-v4</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <vendor>AMD</vendor>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <microcode version='16777317'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <signature family='23' model='49' stepping='0'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature name='x2apic'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature name='tsc-deadline'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature name='osxsave'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature name='hypervisor'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature name='tsc_adjust'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature name='spec-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature name='stibp'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature name='arch-capabilities'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature name='ssbd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature name='cmp_legacy'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature name='topoext'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature name='virt-ssbd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature name='lbrv'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature name='tsc-scale'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature name='vmcb-clean'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature name='pause-filter'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature name='pfthreshold'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature name='svme-addr-chk'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature name='rdctl-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature name='skip-l1dfl-vmentry'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature name='mds-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature name='pschange-mc-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <pages unit='KiB' size='4'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <pages unit='KiB' size='2048'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <pages unit='KiB' size='1048576'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </cpu>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <power_management>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <suspend_mem/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <suspend_disk/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <suspend_hybrid/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </power_management>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <iommu support='no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <migration_features>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <live/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <uri_transports>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <uri_transport>tcp</uri_transport>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <uri_transport>rdma</uri_transport>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </uri_transports>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </migration_features>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <topology>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <cells num='1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <cell id='0'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:           <memory unit='KiB'>7864308</memory>
Jan 27 22:34:40 compute-0 nova_compute[185650]:           <pages unit='KiB' size='4'>1966077</pages>
Jan 27 22:34:40 compute-0 nova_compute[185650]:           <pages unit='KiB' size='2048'>0</pages>
Jan 27 22:34:40 compute-0 nova_compute[185650]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 27 22:34:40 compute-0 nova_compute[185650]:           <distances>
Jan 27 22:34:40 compute-0 nova_compute[185650]:             <sibling id='0' value='10'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:           </distances>
Jan 27 22:34:40 compute-0 nova_compute[185650]:           <cpus num='8'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:           </cpus>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         </cell>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </cells>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </topology>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <cache>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </cache>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <secmodel>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model>selinux</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <doi>0</doi>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </secmodel>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <secmodel>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model>dac</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <doi>0</doi>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </secmodel>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   </host>
Jan 27 22:34:40 compute-0 nova_compute[185650]: 
Jan 27 22:34:40 compute-0 nova_compute[185650]:   <guest>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <os_type>hvm</os_type>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <arch name='i686'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <wordsize>32</wordsize>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <domain type='qemu'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <domain type='kvm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </arch>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <features>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <pae/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <nonpae/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <acpi default='on' toggle='yes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <apic default='on' toggle='no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <cpuselection/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <deviceboot/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <disksnapshot default='on' toggle='no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <externalSnapshot/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </features>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   </guest>
Jan 27 22:34:40 compute-0 nova_compute[185650]: 
Jan 27 22:34:40 compute-0 nova_compute[185650]:   <guest>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <os_type>hvm</os_type>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <arch name='x86_64'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <wordsize>64</wordsize>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <domain type='qemu'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <domain type='kvm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </arch>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <features>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <acpi default='on' toggle='yes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <apic default='on' toggle='no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <cpuselection/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <deviceboot/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <disksnapshot default='on' toggle='no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <externalSnapshot/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </features>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   </guest>
Jan 27 22:34:40 compute-0 nova_compute[185650]: 
Jan 27 22:34:40 compute-0 nova_compute[185650]: </capabilities>
Jan 27 22:34:40 compute-0 nova_compute[185650]: 
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.791 185654 WARNING nova.virt.libvirt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.792 185654 DEBUG nova.virt.libvirt.volume.mount [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.794 185654 DEBUG nova.virt.libvirt.host [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.798 185654 DEBUG nova.virt.libvirt.host [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 27 22:34:40 compute-0 nova_compute[185650]: <domainCapabilities>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   <path>/usr/libexec/qemu-kvm</path>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   <domain>kvm</domain>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   <arch>i686</arch>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   <vcpu max='240'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   <iothreads supported='yes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   <os supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <enum name='firmware'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <loader supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='type'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>rom</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>pflash</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='readonly'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>yes</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>no</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='secure'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>no</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </loader>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   </os>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   <cpu>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <mode name='host-passthrough' supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='hostPassthroughMigratable'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>on</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>off</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </mode>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <mode name='maximum' supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='maximumMigratable'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>on</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>off</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </mode>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <mode name='host-model' supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <vendor>AMD</vendor>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='x2apic'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='tsc-deadline'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='hypervisor'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='tsc_adjust'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='spec-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='stibp'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='ssbd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='cmp_legacy'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='overflow-recov'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='succor'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='ibrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='amd-ssbd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='virt-ssbd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='lbrv'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='tsc-scale'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='vmcb-clean'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='flushbyasid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='pause-filter'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='pfthreshold'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='svme-addr-chk'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='disable' name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </mode>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <mode name='custom' supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Broadwell'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Broadwell-IBRS'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Broadwell-noTSX'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Broadwell-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Broadwell-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Broadwell-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Broadwell-v4'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Cascadelake-Server'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Cascadelake-Server-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Cascadelake-Server-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Cascadelake-Server-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Cascadelake-Server-v4'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Cascadelake-Server-v5'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='ClearwaterForest'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni-int16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bhi-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bhi-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cmpccxadd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ddpd-u'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='intel-psfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ipred-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='lam'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='prefetchiti'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rrsba-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sha512'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sm3'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sm4'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='ClearwaterForest-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni-int16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bhi-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bhi-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cmpccxadd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ddpd-u'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='intel-psfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ipred-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='lam'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='prefetchiti'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rrsba-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sha512'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sm3'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sm4'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Cooperlake'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Cooperlake-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Cooperlake-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Denverton'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mpx'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Denverton-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mpx'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Denverton-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Denverton-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Dhyana-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Genoa'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amd-psfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='auto-ibrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='stibp-always-on'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Genoa-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amd-psfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='auto-ibrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='stibp-always-on'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Genoa-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amd-psfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='auto-ibrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fs-gs-base-ns'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='perfmon-v2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='stibp-always-on'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Milan'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Milan-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Milan-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amd-psfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='stibp-always-on'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Milan-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amd-psfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='stibp-always-on'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Rome'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Rome-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Rome-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Rome-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Turin'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amd-psfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='auto-ibrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vp2intersect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fs-gs-base-ns'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibpb-brtype'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='perfmon-v2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='prefetchi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbpb'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='srso-user-kernel-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='stibp-always-on'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Turin-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amd-psfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='auto-ibrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vp2intersect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fs-gs-base-ns'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibpb-brtype'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='perfmon-v2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='prefetchi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbpb'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='srso-user-kernel-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='stibp-always-on'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-v4'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-v5'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='GraniteRapids'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='prefetchiti'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='GraniteRapids-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='prefetchiti'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='GraniteRapids-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx10'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx10-128'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx10-256'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx10-512'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='prefetchiti'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='GraniteRapids-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx10'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx10-128'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx10-256'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx10-512'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='prefetchiti'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Haswell'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Haswell-IBRS'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Haswell-noTSX'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Haswell-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Haswell-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Haswell-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Haswell-v4'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server-noTSX'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server-v4'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server-v5'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server-v6'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server-v7'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='IvyBridge'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='IvyBridge-IBRS'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='IvyBridge-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='IvyBridge-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='KnightsMill'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-4fmaps'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-4vnniw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512er'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512pf'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='KnightsMill-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-4fmaps'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-4vnniw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512er'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512pf'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Opteron_G4'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fma4'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xop'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Opteron_G4-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fma4'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xop'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Opteron_G5'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fma4'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='tbm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xop'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Opteron_G5-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fma4'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='tbm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xop'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='SapphireRapids'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='SapphireRapids-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='SapphireRapids-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='SapphireRapids-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='SapphireRapids-v4'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='SierraForest'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cmpccxadd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='SierraForest-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cmpccxadd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='SierraForest-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bhi-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cmpccxadd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='intel-psfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ipred-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='lam'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rrsba-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='SierraForest-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bhi-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cmpccxadd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='intel-psfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ipred-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='lam'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rrsba-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Skylake-Client'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Skylake-Client-IBRS'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Skylake-Client-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Skylake-Client-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Skylake-Client-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Skylake-Client-v4'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Skylake-Server'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Skylake-Server-IBRS'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Skylake-Server-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Skylake-Server-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Skylake-Server-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Skylake-Server-v4'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Skylake-Server-v5'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Snowridge'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='core-capability'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mpx'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='split-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Snowridge-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='core-capability'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mpx'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='split-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Snowridge-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='core-capability'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='split-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Snowridge-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='core-capability'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='split-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Snowridge-v4'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='athlon'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='3dnow'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='3dnowext'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='athlon-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='3dnow'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='3dnowext'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='core2duo'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='core2duo-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='coreduo'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='coreduo-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='n270'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='n270-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='phenom'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='3dnow'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='3dnowext'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='phenom-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='3dnow'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='3dnowext'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </mode>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   </cpu>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   <memoryBacking supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <enum name='sourceType'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <value>file</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <value>anonymous</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <value>memfd</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   </memoryBacking>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   <devices>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <disk supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='diskDevice'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>disk</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>cdrom</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>floppy</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>lun</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='bus'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>ide</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>fdc</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>scsi</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>virtio</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>usb</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>sata</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='model'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>virtio</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>virtio-transitional</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>virtio-non-transitional</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </disk>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <graphics supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='type'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>vnc</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>egl-headless</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>dbus</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </graphics>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <video supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='modelType'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>vga</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>cirrus</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>virtio</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>none</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>bochs</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>ramfb</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </video>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <hostdev supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='mode'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>subsystem</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='startupPolicy'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>default</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>mandatory</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>requisite</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>optional</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='subsysType'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>usb</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>pci</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>scsi</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='capsType'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='pciBackend'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </hostdev>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <rng supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='model'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>virtio</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>virtio-transitional</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>virtio-non-transitional</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='backendModel'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>random</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>egd</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>builtin</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </rng>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <filesystem supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='driverType'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>path</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>handle</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>virtiofs</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </filesystem>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <tpm supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='model'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>tpm-tis</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>tpm-crb</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='backendModel'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>emulator</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>external</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='backendVersion'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>2.0</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </tpm>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <redirdev supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='bus'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>usb</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </redirdev>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <channel supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='type'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>pty</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>unix</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </channel>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <crypto supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='model'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='type'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>qemu</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='backendModel'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>builtin</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </crypto>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <interface supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='backendType'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>default</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>passt</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </interface>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <panic supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='model'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>isa</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>hyperv</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </panic>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <console supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='type'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>null</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>vc</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>pty</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>dev</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>file</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>pipe</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>stdio</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>udp</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>tcp</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>unix</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>qemu-vdagent</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>dbus</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </console>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   </devices>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   <features>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <gic supported='no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <vmcoreinfo supported='yes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <genid supported='yes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <backingStoreInput supported='yes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <backup supported='yes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <async-teardown supported='yes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <s390-pv supported='no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <ps2 supported='yes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <tdx supported='no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <sev supported='no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <sgx supported='no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <hyperv supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='features'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>relaxed</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>vapic</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>spinlocks</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>vpindex</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>runtime</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>synic</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>stimer</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>reset</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>vendor_id</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>frequencies</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>reenlightenment</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>tlbflush</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>ipi</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>avic</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>emsr_bitmap</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>xmm_input</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <defaults>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <spinlocks>4095</spinlocks>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <stimer_direct>on</stimer_direct>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <tlbflush_direct>on</tlbflush_direct>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <tlbflush_extended>on</tlbflush_extended>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </defaults>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </hyperv>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <launchSecurity supported='no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   </features>
Jan 27 22:34:40 compute-0 nova_compute[185650]: </domainCapabilities>
Jan 27 22:34:40 compute-0 nova_compute[185650]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.805 185654 DEBUG nova.virt.libvirt.host [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 27 22:34:40 compute-0 nova_compute[185650]: <domainCapabilities>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   <path>/usr/libexec/qemu-kvm</path>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   <domain>kvm</domain>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   <arch>i686</arch>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   <vcpu max='4096'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   <iothreads supported='yes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   <os supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <enum name='firmware'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <loader supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='type'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>rom</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>pflash</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='readonly'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>yes</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>no</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='secure'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>no</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </loader>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   </os>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   <cpu>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <mode name='host-passthrough' supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='hostPassthroughMigratable'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>on</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>off</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </mode>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <mode name='maximum' supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='maximumMigratable'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>on</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>off</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </mode>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <mode name='host-model' supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <vendor>AMD</vendor>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='x2apic'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='tsc-deadline'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='hypervisor'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='tsc_adjust'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='spec-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='stibp'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='ssbd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='cmp_legacy'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='overflow-recov'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='succor'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='ibrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='amd-ssbd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='virt-ssbd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='lbrv'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='tsc-scale'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='vmcb-clean'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='flushbyasid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='pause-filter'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='pfthreshold'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='svme-addr-chk'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='disable' name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </mode>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <mode name='custom' supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Broadwell'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Broadwell-IBRS'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Broadwell-noTSX'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Broadwell-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Broadwell-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Broadwell-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Broadwell-v4'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Cascadelake-Server'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Cascadelake-Server-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Cascadelake-Server-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Cascadelake-Server-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Cascadelake-Server-v4'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Cascadelake-Server-v5'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='ClearwaterForest'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni-int16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bhi-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bhi-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cmpccxadd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ddpd-u'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='intel-psfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ipred-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='lam'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='prefetchiti'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rrsba-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sha512'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sm3'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sm4'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='ClearwaterForest-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni-int16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bhi-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bhi-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cmpccxadd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ddpd-u'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='intel-psfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ipred-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='lam'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='prefetchiti'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rrsba-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sha512'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sm3'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sm4'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Cooperlake'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Cooperlake-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Cooperlake-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Denverton'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mpx'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Denverton-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mpx'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Denverton-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Denverton-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Dhyana-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Genoa'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amd-psfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='auto-ibrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='stibp-always-on'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Genoa-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amd-psfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='auto-ibrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='stibp-always-on'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Genoa-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amd-psfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='auto-ibrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fs-gs-base-ns'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='perfmon-v2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='stibp-always-on'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Milan'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Milan-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Milan-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amd-psfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='stibp-always-on'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Milan-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amd-psfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='stibp-always-on'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Rome'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Rome-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Rome-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Rome-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Turin'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amd-psfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='auto-ibrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vp2intersect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fs-gs-base-ns'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibpb-brtype'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='perfmon-v2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='prefetchi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbpb'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='srso-user-kernel-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='stibp-always-on'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Turin-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amd-psfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='auto-ibrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vp2intersect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fs-gs-base-ns'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibpb-brtype'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='perfmon-v2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='prefetchi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbpb'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='srso-user-kernel-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='stibp-always-on'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-v4'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-v5'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='GraniteRapids'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='prefetchiti'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='GraniteRapids-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='prefetchiti'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='GraniteRapids-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx10'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx10-128'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx10-256'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx10-512'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='prefetchiti'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='GraniteRapids-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx10'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx10-128'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx10-256'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx10-512'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='prefetchiti'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Haswell'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Haswell-IBRS'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Haswell-noTSX'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Haswell-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Haswell-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Haswell-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Haswell-v4'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server-noTSX'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server-v4'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server-v5'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server-v6'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server-v7'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='IvyBridge'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='IvyBridge-IBRS'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='IvyBridge-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='IvyBridge-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='KnightsMill'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-4fmaps'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-4vnniw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512er'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512pf'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='KnightsMill-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-4fmaps'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-4vnniw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512er'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512pf'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Opteron_G4'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fma4'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xop'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Opteron_G4-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fma4'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xop'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Opteron_G5'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fma4'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='tbm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xop'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Opteron_G5-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fma4'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='tbm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xop'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='SapphireRapids'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='SapphireRapids-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='SapphireRapids-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='SapphireRapids-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='SapphireRapids-v4'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='SierraForest'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cmpccxadd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='SierraForest-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cmpccxadd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='SierraForest-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bhi-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cmpccxadd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='intel-psfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ipred-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='lam'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rrsba-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='SierraForest-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bhi-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cmpccxadd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='intel-psfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ipred-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='lam'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rrsba-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Skylake-Client'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Skylake-Client-IBRS'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Skylake-Client-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Skylake-Client-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Skylake-Client-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Skylake-Client-v4'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Skylake-Server'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Skylake-Server-IBRS'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Skylake-Server-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Skylake-Server-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Skylake-Server-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Skylake-Server-v4'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Skylake-Server-v5'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Snowridge'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='core-capability'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mpx'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='split-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Snowridge-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='core-capability'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mpx'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='split-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Snowridge-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='core-capability'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='split-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Snowridge-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='core-capability'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='split-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Snowridge-v4'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='athlon'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='3dnow'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='3dnowext'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='athlon-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='3dnow'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='3dnowext'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='core2duo'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='core2duo-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='coreduo'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='coreduo-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='n270'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='n270-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='phenom'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='3dnow'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='3dnowext'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='phenom-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='3dnow'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='3dnowext'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </mode>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   </cpu>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   <memoryBacking supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <enum name='sourceType'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <value>file</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <value>anonymous</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <value>memfd</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   </memoryBacking>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   <devices>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <disk supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='diskDevice'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>disk</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>cdrom</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>floppy</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>lun</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='bus'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>fdc</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>scsi</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>virtio</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>usb</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>sata</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='model'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>virtio</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>virtio-transitional</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>virtio-non-transitional</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </disk>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <graphics supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='type'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>vnc</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>egl-headless</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>dbus</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </graphics>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <video supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='modelType'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>vga</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>cirrus</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>virtio</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>none</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>bochs</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>ramfb</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </video>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <hostdev supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='mode'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>subsystem</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='startupPolicy'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>default</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>mandatory</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>requisite</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>optional</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='subsysType'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>usb</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>pci</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>scsi</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='capsType'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='pciBackend'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </hostdev>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <rng supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='model'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>virtio</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>virtio-transitional</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>virtio-non-transitional</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='backendModel'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>random</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>egd</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>builtin</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </rng>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <filesystem supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='driverType'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>path</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>handle</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>virtiofs</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </filesystem>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <tpm supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='model'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>tpm-tis</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>tpm-crb</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='backendModel'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>emulator</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>external</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='backendVersion'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>2.0</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </tpm>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <redirdev supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='bus'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>usb</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </redirdev>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <channel supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='type'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>pty</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>unix</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </channel>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <crypto supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='model'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='type'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>qemu</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='backendModel'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>builtin</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </crypto>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <interface supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='backendType'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>default</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>passt</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </interface>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <panic supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='model'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>isa</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>hyperv</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </panic>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <console supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='type'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>null</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>vc</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>pty</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>dev</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>file</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>pipe</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>stdio</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>udp</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>tcp</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>unix</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>qemu-vdagent</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>dbus</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </console>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   </devices>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   <features>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <gic supported='no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <vmcoreinfo supported='yes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <genid supported='yes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <backingStoreInput supported='yes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <backup supported='yes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <async-teardown supported='yes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <s390-pv supported='no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <ps2 supported='yes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <tdx supported='no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <sev supported='no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <sgx supported='no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <hyperv supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='features'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>relaxed</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>vapic</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>spinlocks</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>vpindex</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>runtime</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>synic</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>stimer</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>reset</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>vendor_id</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>frequencies</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>reenlightenment</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>tlbflush</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>ipi</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>avic</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>emsr_bitmap</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>xmm_input</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <defaults>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <spinlocks>4095</spinlocks>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <stimer_direct>on</stimer_direct>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <tlbflush_direct>on</tlbflush_direct>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <tlbflush_extended>on</tlbflush_extended>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </defaults>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </hyperv>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <launchSecurity supported='no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   </features>
Jan 27 22:34:40 compute-0 nova_compute[185650]: </domainCapabilities>
Jan 27 22:34:40 compute-0 nova_compute[185650]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.871 185654 DEBUG nova.virt.libvirt.host [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 27 22:34:40 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.876 185654 DEBUG nova.virt.libvirt.host [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 27 22:34:40 compute-0 nova_compute[185650]: <domainCapabilities>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   <path>/usr/libexec/qemu-kvm</path>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   <domain>kvm</domain>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   <arch>x86_64</arch>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   <vcpu max='240'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   <iothreads supported='yes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   <os supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <enum name='firmware'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <loader supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='type'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>rom</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>pflash</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='readonly'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>yes</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>no</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='secure'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>no</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </loader>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   </os>
Jan 27 22:34:40 compute-0 nova_compute[185650]:   <cpu>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <mode name='host-passthrough' supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='hostPassthroughMigratable'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>on</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>off</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </mode>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <mode name='maximum' supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <enum name='maximumMigratable'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>on</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <value>off</value>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </mode>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <mode name='host-model' supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <vendor>AMD</vendor>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='x2apic'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='tsc-deadline'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='hypervisor'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='tsc_adjust'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='spec-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='stibp'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='ssbd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='cmp_legacy'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='overflow-recov'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='succor'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='ibrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='amd-ssbd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='virt-ssbd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='lbrv'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='tsc-scale'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='vmcb-clean'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='flushbyasid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='pause-filter'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='pfthreshold'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='svme-addr-chk'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <feature policy='disable' name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     </mode>
Jan 27 22:34:40 compute-0 nova_compute[185650]:     <mode name='custom' supported='yes'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Broadwell'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Broadwell-IBRS'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Broadwell-noTSX'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Broadwell-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Broadwell-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Broadwell-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Broadwell-v4'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Cascadelake-Server'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Cascadelake-Server-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Cascadelake-Server-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Cascadelake-Server-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Cascadelake-Server-v4'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Cascadelake-Server-v5'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='ClearwaterForest'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni-int16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bhi-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bhi-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cmpccxadd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ddpd-u'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='intel-psfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ipred-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='lam'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='prefetchiti'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rrsba-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sha512'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sm3'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sm4'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='ClearwaterForest-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni-int16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bhi-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bhi-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cmpccxadd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ddpd-u'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='intel-psfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ipred-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='lam'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='prefetchiti'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rrsba-ctrl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sha512'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sm3'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sm4'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Cooperlake'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Cooperlake-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Cooperlake-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Denverton'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mpx'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Denverton-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mpx'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Denverton-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Denverton-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Dhyana-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Genoa'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amd-psfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='auto-ibrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='stibp-always-on'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Genoa-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amd-psfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='auto-ibrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='stibp-always-on'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Genoa-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amd-psfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='auto-ibrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fs-gs-base-ns'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='perfmon-v2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='stibp-always-on'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Milan'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Milan-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Milan-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amd-psfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='stibp-always-on'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Milan-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amd-psfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='stibp-always-on'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Rome'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Rome-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Rome-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Rome-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Turin'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amd-psfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='auto-ibrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vp2intersect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fs-gs-base-ns'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibpb-brtype'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='perfmon-v2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='prefetchi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbpb'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='srso-user-kernel-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='stibp-always-on'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-Turin-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amd-psfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='auto-ibrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vp2intersect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fs-gs-base-ns'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibpb-brtype'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='perfmon-v2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='prefetchi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbpb'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='srso-user-kernel-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='stibp-always-on'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-v4'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='EPYC-v5'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='GraniteRapids'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='prefetchiti'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='GraniteRapids-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='prefetchiti'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='GraniteRapids-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx10'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx10-128'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx10-256'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx10-512'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='prefetchiti'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='GraniteRapids-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx10'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx10-128'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx10-256'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx10-512'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='prefetchiti'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Haswell'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Haswell-IBRS'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Haswell-noTSX'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Haswell-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Haswell-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Haswell-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Haswell-v4'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server-noTSX'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server-v4'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server-v5'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server-v6'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server-v7'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='IvyBridge'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='IvyBridge-IBRS'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='IvyBridge-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='IvyBridge-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='KnightsMill'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-4fmaps'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-4vnniw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512er'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512pf'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='KnightsMill-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-4fmaps'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-4vnniw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512er'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512pf'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Opteron_G4'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fma4'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xop'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Opteron_G4-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fma4'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xop'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Opteron_G5'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fma4'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='tbm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xop'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='Opteron_G5-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fma4'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='tbm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xop'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='SapphireRapids'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='SapphireRapids-v1'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='SapphireRapids-v2'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='SapphireRapids-v3'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='SapphireRapids-v4'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 27 22:34:40 compute-0 nova_compute[185650]:       <blockers model='SierraForest'>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-ifma'/>
Jan 27 22:34:40 compute-0 nova_compute[185650]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='cmpccxadd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='SierraForest-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-ifma'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='cmpccxadd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='SierraForest-v2'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-ifma'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='bhi-ctrl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='cmpccxadd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='intel-psfd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ipred-ctrl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='lam'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rrsba-ctrl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='SierraForest-v3'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-ifma'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='bhi-ctrl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='cmpccxadd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='intel-psfd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ipred-ctrl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='lam'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rrsba-ctrl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Skylake-Client'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Skylake-Client-IBRS'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Skylake-Client-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Skylake-Client-v2'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Skylake-Client-v3'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Skylake-Client-v4'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Skylake-Server'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Skylake-Server-IBRS'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Skylake-Server-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Skylake-Server-v2'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Skylake-Server-v3'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Skylake-Server-v4'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Skylake-Server-v5'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Snowridge'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='core-capability'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='mpx'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='split-lock-detect'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Snowridge-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='core-capability'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='mpx'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='split-lock-detect'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Snowridge-v2'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='core-capability'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='split-lock-detect'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Snowridge-v3'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='core-capability'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='split-lock-detect'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Snowridge-v4'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='athlon'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='3dnow'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='3dnowext'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='athlon-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='3dnow'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='3dnowext'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='core2duo'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='core2duo-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='coreduo'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='coreduo-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='n270'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='n270-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='phenom'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='3dnow'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='3dnowext'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='phenom-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='3dnow'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='3dnowext'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </mode>
Jan 27 22:34:41 compute-0 nova_compute[185650]:   </cpu>
Jan 27 22:34:41 compute-0 nova_compute[185650]:   <memoryBacking supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <enum name='sourceType'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <value>file</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <value>anonymous</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <value>memfd</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:   </memoryBacking>
Jan 27 22:34:41 compute-0 nova_compute[185650]:   <devices>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <disk supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='diskDevice'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>disk</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>cdrom</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>floppy</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>lun</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='bus'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>ide</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>fdc</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>scsi</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>virtio</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>usb</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>sata</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='model'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>virtio</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>virtio-transitional</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>virtio-non-transitional</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </disk>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <graphics supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='type'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>vnc</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>egl-headless</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>dbus</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </graphics>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <video supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='modelType'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>vga</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>cirrus</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>virtio</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>none</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>bochs</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>ramfb</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </video>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <hostdev supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='mode'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>subsystem</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='startupPolicy'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>default</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>mandatory</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>requisite</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>optional</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='subsysType'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>usb</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>pci</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>scsi</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='capsType'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='pciBackend'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </hostdev>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <rng supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='model'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>virtio</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>virtio-transitional</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>virtio-non-transitional</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='backendModel'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>random</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>egd</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>builtin</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </rng>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <filesystem supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='driverType'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>path</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>handle</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>virtiofs</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </filesystem>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <tpm supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='model'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>tpm-tis</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>tpm-crb</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='backendModel'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>emulator</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>external</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='backendVersion'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>2.0</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </tpm>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <redirdev supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='bus'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>usb</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </redirdev>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <channel supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='type'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>pty</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>unix</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </channel>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <crypto supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='model'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='type'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>qemu</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='backendModel'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>builtin</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </crypto>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <interface supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='backendType'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>default</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>passt</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </interface>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <panic supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='model'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>isa</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>hyperv</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </panic>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <console supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='type'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>null</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>vc</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>pty</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>dev</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>file</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>pipe</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>stdio</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>udp</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>tcp</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>unix</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>qemu-vdagent</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>dbus</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </console>
Jan 27 22:34:41 compute-0 nova_compute[185650]:   </devices>
Jan 27 22:34:41 compute-0 nova_compute[185650]:   <features>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <gic supported='no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <vmcoreinfo supported='yes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <genid supported='yes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <backingStoreInput supported='yes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <backup supported='yes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <async-teardown supported='yes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <s390-pv supported='no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <ps2 supported='yes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <tdx supported='no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <sev supported='no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <sgx supported='no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <hyperv supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='features'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>relaxed</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>vapic</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>spinlocks</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>vpindex</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>runtime</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>synic</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>stimer</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>reset</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>vendor_id</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>frequencies</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>reenlightenment</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>tlbflush</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>ipi</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>avic</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>emsr_bitmap</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>xmm_input</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <defaults>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <spinlocks>4095</spinlocks>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <stimer_direct>on</stimer_direct>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <tlbflush_direct>on</tlbflush_direct>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <tlbflush_extended>on</tlbflush_extended>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </defaults>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </hyperv>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <launchSecurity supported='no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:   </features>
Jan 27 22:34:41 compute-0 nova_compute[185650]: </domainCapabilities>
Jan 27 22:34:41 compute-0 nova_compute[185650]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 27 22:34:41 compute-0 nova_compute[185650]: 2026-01-27 22:34:40.954 185654 DEBUG nova.virt.libvirt.host [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 27 22:34:41 compute-0 nova_compute[185650]: <domainCapabilities>
Jan 27 22:34:41 compute-0 nova_compute[185650]:   <path>/usr/libexec/qemu-kvm</path>
Jan 27 22:34:41 compute-0 nova_compute[185650]:   <domain>kvm</domain>
Jan 27 22:34:41 compute-0 nova_compute[185650]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 27 22:34:41 compute-0 nova_compute[185650]:   <arch>x86_64</arch>
Jan 27 22:34:41 compute-0 nova_compute[185650]:   <vcpu max='4096'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:   <iothreads supported='yes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:   <os supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <enum name='firmware'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <value>efi</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <loader supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='type'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>rom</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>pflash</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='readonly'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>yes</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>no</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='secure'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>yes</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>no</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </loader>
Jan 27 22:34:41 compute-0 nova_compute[185650]:   </os>
Jan 27 22:34:41 compute-0 nova_compute[185650]:   <cpu>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <mode name='host-passthrough' supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='hostPassthroughMigratable'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>on</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>off</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </mode>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <mode name='maximum' supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='maximumMigratable'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>on</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>off</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </mode>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <mode name='host-model' supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <vendor>AMD</vendor>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <feature policy='require' name='x2apic'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <feature policy='require' name='tsc-deadline'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <feature policy='require' name='hypervisor'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <feature policy='require' name='tsc_adjust'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <feature policy='require' name='spec-ctrl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <feature policy='require' name='stibp'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <feature policy='require' name='ssbd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <feature policy='require' name='cmp_legacy'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <feature policy='require' name='overflow-recov'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <feature policy='require' name='succor'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <feature policy='require' name='ibrs'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <feature policy='require' name='amd-ssbd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <feature policy='require' name='virt-ssbd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <feature policy='require' name='lbrv'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <feature policy='require' name='tsc-scale'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <feature policy='require' name='vmcb-clean'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <feature policy='require' name='flushbyasid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <feature policy='require' name='pause-filter'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <feature policy='require' name='pfthreshold'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <feature policy='require' name='svme-addr-chk'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <feature policy='disable' name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </mode>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <mode name='custom' supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Broadwell'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Broadwell-IBRS'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Broadwell-noTSX'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Broadwell-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Broadwell-v2'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Broadwell-v3'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Broadwell-v4'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Cascadelake-Server'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Cascadelake-Server-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Cascadelake-Server-v2'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Cascadelake-Server-v3'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Cascadelake-Server-v4'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Cascadelake-Server-v5'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='ClearwaterForest'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-ifma'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-vnni-int16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='bhi-ctrl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='bhi-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='cmpccxadd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ddpd-u'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='intel-psfd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ipred-ctrl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='lam'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='prefetchiti'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rrsba-ctrl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='sha512'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='sm3'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='sm4'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='ClearwaterForest-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-ifma'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-vnni-int16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='bhi-ctrl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='bhi-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='cmpccxadd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ddpd-u'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='intel-psfd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ipred-ctrl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='lam'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='prefetchiti'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rrsba-ctrl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='sha512'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='sm3'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='sm4'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Cooperlake'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Cooperlake-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Cooperlake-v2'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Denverton'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='mpx'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Denverton-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='mpx'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Denverton-v2'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Denverton-v3'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Dhyana-v2'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='EPYC-Genoa'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amd-psfd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='auto-ibrs'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='stibp-always-on'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='EPYC-Genoa-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amd-psfd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='auto-ibrs'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='stibp-always-on'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='EPYC-Genoa-v2'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amd-psfd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='auto-ibrs'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fs-gs-base-ns'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='perfmon-v2'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='stibp-always-on'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='EPYC-Milan'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='EPYC-Milan-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='EPYC-Milan-v2'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amd-psfd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='stibp-always-on'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='EPYC-Milan-v3'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amd-psfd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='stibp-always-on'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='EPYC-Rome'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='EPYC-Rome-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='EPYC-Rome-v2'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='EPYC-Rome-v3'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='EPYC-Turin'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amd-psfd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='auto-ibrs'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-vp2intersect'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fs-gs-base-ns'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibpb-brtype'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='perfmon-v2'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='prefetchi'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='sbpb'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='srso-user-kernel-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='stibp-always-on'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='EPYC-Turin-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amd-psfd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='auto-ibrs'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-vp2intersect'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fs-gs-base-ns'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibpb-brtype'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='no-nested-data-bp'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='null-sel-clr-base'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='perfmon-v2'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='prefetchi'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='sbpb'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='srso-user-kernel-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='stibp-always-on'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='EPYC-v3'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='EPYC-v4'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='EPYC-v5'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='GraniteRapids'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amx-fp16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='prefetchiti'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='GraniteRapids-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amx-fp16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='prefetchiti'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='GraniteRapids-v2'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amx-fp16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx10'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx10-128'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx10-256'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx10-512'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='prefetchiti'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='GraniteRapids-v3'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amx-fp16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx10'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx10-128'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx10-256'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx10-512'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='prefetchiti'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Haswell'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Haswell-IBRS'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Haswell-noTSX'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Haswell-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Haswell-v2'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Haswell-v3'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Haswell-v4'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server-noTSX'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server-v2'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server-v3'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server-v4'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server-v5'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server-v6'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Icelake-Server-v7'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='IvyBridge'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='IvyBridge-IBRS'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='IvyBridge-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='IvyBridge-v2'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='KnightsMill'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-4fmaps'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-4vnniw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512er'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512pf'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='KnightsMill-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-4fmaps'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-4vnniw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512er'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512pf'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Opteron_G4'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fma4'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xop'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Opteron_G4-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fma4'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xop'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Opteron_G5'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fma4'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='tbm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xop'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Opteron_G5-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fma4'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='tbm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xop'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='SapphireRapids'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='SapphireRapids-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='SapphireRapids-v2'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='SapphireRapids-v3'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='SapphireRapids-v4'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amx-bf16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amx-int8'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='amx-tile'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-bf16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-fp16'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512-vpopcntdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bitalg'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512ifma'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vbmi2'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrc'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fzrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='la57'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='taa-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='tsx-ldtrk'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xfd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='SierraForest'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-ifma'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='cmpccxadd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='SierraForest-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-ifma'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='cmpccxadd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='SierraForest-v2'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-ifma'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='bhi-ctrl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='cmpccxadd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='intel-psfd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ipred-ctrl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='lam'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rrsba-ctrl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='SierraForest-v3'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-ifma'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-ne-convert'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-vnni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx-vnni-int8'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='bhi-ctrl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='bus-lock-detect'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='cmpccxadd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fbsdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='fsrs'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ibrs-all'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='intel-psfd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ipred-ctrl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='lam'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='mcdt-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pbrsb-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='psdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rrsba-ctrl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='sbdr-ssdp-no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='serialize'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vaes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='vpclmulqdq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Skylake-Client'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Skylake-Client-IBRS'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Skylake-Client-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Skylake-Client-v2'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Skylake-Client-v3'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Skylake-Client-v4'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Skylake-Server'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Skylake-Server-IBRS'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Skylake-Server-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Skylake-Server-v2'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='hle'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='rtm'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Skylake-Server-v3'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Skylake-Server-v4'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Skylake-Server-v5'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512bw'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512cd'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512dq'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512f'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='avx512vl'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='invpcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pcid'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='pku'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Snowridge'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='core-capability'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='mpx'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='split-lock-detect'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Snowridge-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='core-capability'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='mpx'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='split-lock-detect'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Snowridge-v2'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='core-capability'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='split-lock-detect'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Snowridge-v3'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='core-capability'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='split-lock-detect'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='Snowridge-v4'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='cldemote'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='erms'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='gfni'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdir64b'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='movdiri'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='xsaves'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='athlon'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='3dnow'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='3dnowext'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='athlon-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='3dnow'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='3dnowext'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='core2duo'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='core2duo-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='coreduo'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='coreduo-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='n270'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='n270-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='ss'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='phenom'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='3dnow'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='3dnowext'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <blockers model='phenom-v1'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='3dnow'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <feature name='3dnowext'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </blockers>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </mode>
Jan 27 22:34:41 compute-0 nova_compute[185650]:   </cpu>
Jan 27 22:34:41 compute-0 nova_compute[185650]:   <memoryBacking supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <enum name='sourceType'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <value>file</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <value>anonymous</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <value>memfd</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:   </memoryBacking>
Jan 27 22:34:41 compute-0 nova_compute[185650]:   <devices>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <disk supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='diskDevice'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>disk</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>cdrom</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>floppy</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>lun</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='bus'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>fdc</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>scsi</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>virtio</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>usb</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>sata</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='model'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>virtio</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>virtio-transitional</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>virtio-non-transitional</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </disk>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <graphics supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='type'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>vnc</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>egl-headless</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>dbus</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </graphics>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <video supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='modelType'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>vga</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>cirrus</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>virtio</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>none</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>bochs</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>ramfb</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </video>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <hostdev supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='mode'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>subsystem</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='startupPolicy'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>default</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>mandatory</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>requisite</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>optional</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='subsysType'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>usb</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>pci</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>scsi</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='capsType'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='pciBackend'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </hostdev>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <rng supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='model'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>virtio</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>virtio-transitional</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>virtio-non-transitional</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='backendModel'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>random</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>egd</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>builtin</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </rng>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <filesystem supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='driverType'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>path</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>handle</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>virtiofs</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </filesystem>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <tpm supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='model'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>tpm-tis</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>tpm-crb</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='backendModel'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>emulator</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>external</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='backendVersion'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>2.0</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </tpm>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <redirdev supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='bus'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>usb</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </redirdev>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <channel supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='type'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>pty</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>unix</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </channel>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <crypto supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='model'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='type'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>qemu</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='backendModel'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>builtin</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </crypto>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <interface supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='backendType'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>default</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>passt</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </interface>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <panic supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='model'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>isa</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>hyperv</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </panic>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <console supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='type'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>null</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>vc</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>pty</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>dev</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>file</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>pipe</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>stdio</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>udp</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>tcp</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>unix</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>qemu-vdagent</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>dbus</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </console>
Jan 27 22:34:41 compute-0 nova_compute[185650]:   </devices>
Jan 27 22:34:41 compute-0 nova_compute[185650]:   <features>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <gic supported='no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <vmcoreinfo supported='yes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <genid supported='yes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <backingStoreInput supported='yes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <backup supported='yes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <async-teardown supported='yes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <s390-pv supported='no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <ps2 supported='yes'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <tdx supported='no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <sev supported='no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <sgx supported='no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <hyperv supported='yes'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <enum name='features'>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>relaxed</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>vapic</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>spinlocks</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>vpindex</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>runtime</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>synic</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>stimer</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>reset</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>vendor_id</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>frequencies</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>reenlightenment</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>tlbflush</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>ipi</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>avic</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>emsr_bitmap</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <value>xmm_input</value>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </enum>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       <defaults>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <spinlocks>4095</spinlocks>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <stimer_direct>on</stimer_direct>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <tlbflush_direct>on</tlbflush_direct>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <tlbflush_extended>on</tlbflush_extended>
Jan 27 22:34:41 compute-0 nova_compute[185650]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 27 22:34:41 compute-0 nova_compute[185650]:       </defaults>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     </hyperv>
Jan 27 22:34:41 compute-0 nova_compute[185650]:     <launchSecurity supported='no'/>
Jan 27 22:34:41 compute-0 nova_compute[185650]:   </features>
Jan 27 22:34:41 compute-0 nova_compute[185650]: </domainCapabilities>
Jan 27 22:34:41 compute-0 nova_compute[185650]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 27 22:34:41 compute-0 nova_compute[185650]: 2026-01-27 22:34:41.026 185654 DEBUG nova.virt.libvirt.host [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 27 22:34:41 compute-0 nova_compute[185650]: 2026-01-27 22:34:41.027 185654 DEBUG nova.virt.libvirt.host [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 27 22:34:41 compute-0 nova_compute[185650]: 2026-01-27 22:34:41.027 185654 DEBUG nova.virt.libvirt.host [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 27 22:34:41 compute-0 nova_compute[185650]: 2026-01-27 22:34:41.031 185654 INFO nova.virt.libvirt.host [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Secure Boot support detected
Jan 27 22:34:41 compute-0 nova_compute[185650]: 2026-01-27 22:34:41.033 185654 INFO nova.virt.libvirt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 27 22:34:41 compute-0 nova_compute[185650]: 2026-01-27 22:34:41.033 185654 INFO nova.virt.libvirt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 27 22:34:41 compute-0 nova_compute[185650]: 2026-01-27 22:34:41.042 185654 DEBUG nova.virt.libvirt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 27 22:34:41 compute-0 nova_compute[185650]: 2026-01-27 22:34:41.079 185654 INFO nova.virt.node [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Determined node identity 200c8b8b-d176-4e2d-a773-1ed54a9635a3 from /var/lib/nova/compute_id
Jan 27 22:34:41 compute-0 nova_compute[185650]: 2026-01-27 22:34:41.115 185654 WARNING nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Compute nodes ['200c8b8b-d176-4e2d-a773-1ed54a9635a3'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 27 22:34:41 compute-0 nova_compute[185650]: 2026-01-27 22:34:41.149 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 27 22:34:41 compute-0 nova_compute[185650]: 2026-01-27 22:34:41.190 185654 WARNING nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 27 22:34:41 compute-0 nova_compute[185650]: 2026-01-27 22:34:41.191 185654 DEBUG oslo_concurrency.lockutils [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:34:41 compute-0 nova_compute[185650]: 2026-01-27 22:34:41.191 185654 DEBUG oslo_concurrency.lockutils [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:34:41 compute-0 nova_compute[185650]: 2026-01-27 22:34:41.191 185654 DEBUG oslo_concurrency.lockutils [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:34:41 compute-0 nova_compute[185650]: 2026-01-27 22:34:41.191 185654 DEBUG nova.compute.resource_tracker [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 22:34:41 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 27 22:34:41 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 27 22:34:41 compute-0 nova_compute[185650]: 2026-01-27 22:34:41.449 185654 WARNING nova.virt.libvirt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 22:34:41 compute-0 nova_compute[185650]: 2026-01-27 22:34:41.450 185654 DEBUG nova.compute.resource_tracker [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6086MB free_disk=72.64546966552734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 22:34:41 compute-0 nova_compute[185650]: 2026-01-27 22:34:41.450 185654 DEBUG oslo_concurrency.lockutils [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:34:41 compute-0 nova_compute[185650]: 2026-01-27 22:34:41.451 185654 DEBUG oslo_concurrency.lockutils [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:34:41 compute-0 nova_compute[185650]: 2026-01-27 22:34:41.466 185654 WARNING nova.compute.resource_tracker [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] No compute node record for compute-0.ctlplane.example.com:200c8b8b-d176-4e2d-a773-1ed54a9635a3: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 200c8b8b-d176-4e2d-a773-1ed54a9635a3 could not be found.
Jan 27 22:34:41 compute-0 nova_compute[185650]: 2026-01-27 22:34:41.485 185654 INFO nova.compute.resource_tracker [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 200c8b8b-d176-4e2d-a773-1ed54a9635a3
Jan 27 22:34:41 compute-0 nova_compute[185650]: 2026-01-27 22:34:41.550 185654 DEBUG nova.compute.resource_tracker [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 22:34:41 compute-0 nova_compute[185650]: 2026-01-27 22:34:41.550 185654 DEBUG nova.compute.resource_tracker [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 22:34:42 compute-0 nova_compute[185650]: 2026-01-27 22:34:42.646 185654 INFO nova.scheduler.client.report [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [req-c5bbcefe-06a1-46f0-9473-62ca97446ef5] Created resource provider record via placement API for resource provider with UUID 200c8b8b-d176-4e2d-a773-1ed54a9635a3 and name compute-0.ctlplane.example.com.
Jan 27 22:34:43 compute-0 nova_compute[185650]: 2026-01-27 22:34:43.250 185654 DEBUG nova.virt.libvirt.host [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 27 22:34:43 compute-0 nova_compute[185650]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Jan 27 22:34:43 compute-0 nova_compute[185650]: 2026-01-27 22:34:43.250 185654 INFO nova.virt.libvirt.host [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] kernel doesn't support AMD SEV
Jan 27 22:34:43 compute-0 nova_compute[185650]: 2026-01-27 22:34:43.251 185654 DEBUG nova.compute.provider_tree [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Updating inventory in ProviderTree for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 22:34:43 compute-0 nova_compute[185650]: 2026-01-27 22:34:43.251 185654 DEBUG nova.virt.libvirt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 22:34:43 compute-0 nova_compute[185650]: 2026-01-27 22:34:43.300 185654 DEBUG nova.scheduler.client.report [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Updated inventory for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 27 22:34:43 compute-0 nova_compute[185650]: 2026-01-27 22:34:43.300 185654 DEBUG nova.compute.provider_tree [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Updating resource provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 27 22:34:43 compute-0 nova_compute[185650]: 2026-01-27 22:34:43.300 185654 DEBUG nova.compute.provider_tree [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Updating inventory in ProviderTree for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 22:34:43 compute-0 nova_compute[185650]: 2026-01-27 22:34:43.405 185654 DEBUG nova.compute.provider_tree [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Updating resource provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 27 22:34:43 compute-0 nova_compute[185650]: 2026-01-27 22:34:43.448 185654 DEBUG nova.compute.resource_tracker [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 22:34:43 compute-0 nova_compute[185650]: 2026-01-27 22:34:43.449 185654 DEBUG oslo_concurrency.lockutils [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.998s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:34:43 compute-0 nova_compute[185650]: 2026-01-27 22:34:43.449 185654 DEBUG nova.service [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 27 22:34:43 compute-0 nova_compute[185650]: 2026-01-27 22:34:43.516 185654 DEBUG nova.service [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 27 22:34:43 compute-0 nova_compute[185650]: 2026-01-27 22:34:43.516 185654 DEBUG nova.servicegroup.drivers.db [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 27 22:34:44 compute-0 sshd-session[185990]: Accepted publickey for zuul from 192.168.122.30 port 60750 ssh2: ECDSA SHA256:f2siSFgqhRl+V43NMPJ82N3mZUylXFtu0KAbYfQTK7A
Jan 27 22:34:44 compute-0 systemd-logind[789]: New session 25 of user zuul.
Jan 27 22:34:44 compute-0 systemd[1]: Started Session 25 of User zuul.
Jan 27 22:34:44 compute-0 sshd-session[185990]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 22:34:45 compute-0 podman[186067]: 2026-01-27 22:34:45.415454343 +0000 UTC m=+0.100412638 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 27 22:34:45 compute-0 python3.9[186171]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:34:47 compute-0 sudo[186325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evuklooiabqunwcmxxwwffbozikvuvzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553286.320741-31-236395152156450/AnsiballZ_systemd_service.py'
Jan 27 22:34:47 compute-0 sudo[186325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:34:47 compute-0 python3.9[186327]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 22:34:47 compute-0 systemd[1]: Reloading.
Jan 27 22:34:47 compute-0 systemd-rc-local-generator[186353]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:34:47 compute-0 systemd-sysv-generator[186358]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:34:47 compute-0 sudo[186325]: pam_unix(sudo:session): session closed for user root
Jan 27 22:34:48 compute-0 python3.9[186512]: ansible-ansible.builtin.service_facts Invoked
Jan 27 22:34:48 compute-0 network[186529]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 22:34:48 compute-0 network[186530]: 'network-scripts' will be removed from distribution in near future.
Jan 27 22:34:48 compute-0 network[186531]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 22:34:53 compute-0 sudo[186801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyvxtablbcvufaeuhtdjguhlkzmncedy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553293.6662996-50-125970421630483/AnsiballZ_systemd_service.py'
Jan 27 22:34:53 compute-0 sudo[186801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:34:54 compute-0 python3.9[186803]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:34:54 compute-0 sudo[186801]: pam_unix(sudo:session): session closed for user root
Jan 27 22:34:54 compute-0 sudo[186954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkxhykrixxykvqaqkcdcwneabghrtwfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553294.494588-60-10119995899777/AnsiballZ_file.py'
Jan 27 22:34:54 compute-0 sudo[186954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:34:55 compute-0 python3.9[186956]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:34:55 compute-0 sudo[186954]: pam_unix(sudo:session): session closed for user root
Jan 27 22:34:55 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 22:34:55 compute-0 nova_compute[185650]: 2026-01-27 22:34:55.518 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:34:55 compute-0 nova_compute[185650]: 2026-01-27 22:34:55.577 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:34:55 compute-0 sudo[187107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naiwbuvsjprdazfhsbzbqskkoysfeecg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553295.2927318-68-34499550464348/AnsiballZ_file.py'
Jan 27 22:34:55 compute-0 sudo[187107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:34:55 compute-0 python3.9[187109]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:34:55 compute-0 sudo[187107]: pam_unix(sudo:session): session closed for user root
Jan 27 22:34:56 compute-0 sudo[187259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbkrjdljsjailtfwcpksyninkkqsniig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553296.120506-77-140550201786870/AnsiballZ_command.py'
Jan 27 22:34:56 compute-0 sudo[187259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:34:56 compute-0 python3.9[187261]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:34:56 compute-0 sudo[187259]: pam_unix(sudo:session): session closed for user root
Jan 27 22:34:57 compute-0 python3.9[187413]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 22:34:58 compute-0 sudo[187563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdiqwsznsljmcaykntfhvfwmtvlklzxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553297.859736-95-170948123315459/AnsiballZ_systemd_service.py'
Jan 27 22:34:58 compute-0 sudo[187563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:34:58 compute-0 python3.9[187565]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 22:34:58 compute-0 systemd[1]: Reloading.
Jan 27 22:34:58 compute-0 systemd-rc-local-generator[187583]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:34:58 compute-0 systemd-sysv-generator[187591]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:34:58 compute-0 sudo[187563]: pam_unix(sudo:session): session closed for user root
Jan 27 22:34:59 compute-0 sudo[187749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nypwrxkuqnqhcfjlgmvuksdkjcmpqgrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553299.0211406-103-171626429979998/AnsiballZ_command.py'
Jan 27 22:34:59 compute-0 sudo[187749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:34:59 compute-0 python3.9[187751]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:34:59 compute-0 sudo[187749]: pam_unix(sudo:session): session closed for user root
Jan 27 22:34:59 compute-0 sudo[187902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-natflmclbnauqmqmlfrajwdxuxkvqtoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553299.711358-112-162227451915322/AnsiballZ_file.py'
Jan 27 22:34:59 compute-0 sudo[187902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:00 compute-0 python3.9[187904]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:35:00 compute-0 sudo[187902]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:00 compute-0 python3.9[188054]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:35:01 compute-0 sudo[188206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtiiawpvvalxshogkhitlxdbcspzpswd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553301.0757067-128-153883302362017/AnsiballZ_group.py'
Jan 27 22:35:01 compute-0 sudo[188206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:01 compute-0 python3.9[188208]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Jan 27 22:35:01 compute-0 sudo[188206]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:02 compute-0 sudo[188358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aesjrycgywwmnvefpbsuglbflqyypekk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553302.0629683-139-249740151633257/AnsiballZ_getent.py'
Jan 27 22:35:02 compute-0 sudo[188358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:02 compute-0 python3.9[188360]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Jan 27 22:35:02 compute-0 sudo[188358]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:03 compute-0 sudo[188511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obmfcxixpwzikfqlsauzdsijkvnacouf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553302.8564863-147-227476334320584/AnsiballZ_group.py'
Jan 27 22:35:03 compute-0 sudo[188511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:03 compute-0 python3.9[188513]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 27 22:35:03 compute-0 groupadd[188514]: group added to /etc/group: name=ceilometer, GID=42405
Jan 27 22:35:03 compute-0 groupadd[188514]: group added to /etc/gshadow: name=ceilometer
Jan 27 22:35:03 compute-0 groupadd[188514]: new group: name=ceilometer, GID=42405
Jan 27 22:35:03 compute-0 sudo[188511]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:35:04.120 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:35:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:35:04.121 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:35:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:35:04.121 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:35:04 compute-0 sudo[188669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iubwuwxkfejgffvrianfhkuisqjuwpjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553303.6534178-155-104766008693690/AnsiballZ_user.py'
Jan 27 22:35:04 compute-0 sudo[188669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:04 compute-0 python3.9[188671]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 27 22:35:04 compute-0 useradd[188673]: new user: name=ceilometer, UID=42405, GID=42405, home=/home/ceilometer, shell=/sbin/nologin, from=/dev/pts/0
Jan 27 22:35:04 compute-0 useradd[188673]: add 'ceilometer' to group 'libvirt'
Jan 27 22:35:04 compute-0 useradd[188673]: add 'ceilometer' to shadow group 'libvirt'
Jan 27 22:35:04 compute-0 sudo[188669]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:05 compute-0 python3.9[188829]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:35:06 compute-0 python3.9[188950]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769553305.251719-181-66654837899597/.source.conf _original_basename=ceilometer.conf follow=False checksum=806b21daa538a66a80669be8bf74c414d178dfbc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:35:07 compute-0 python3.9[189100]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:35:07 compute-0 python3.9[189221]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769553306.5288975-181-115207854813632/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:35:08 compute-0 python3.9[189371]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:35:08 compute-0 podman[189466]: 2026-01-27 22:35:08.489453636 +0000 UTC m=+0.049317888 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 22:35:08 compute-0 python3.9[189505]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769553307.7038379-181-204297807256005/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:35:09 compute-0 python3.9[189661]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:35:09 compute-0 python3.9[189813]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:35:10 compute-0 python3.9[189965]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:35:11 compute-0 python3.9[190086]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769553310.013299-240-184715282269963/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:35:11 compute-0 python3.9[190236]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:35:12 compute-0 python3.9[190357]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/openstack_network_exporter.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769553311.2192526-240-81436113822418/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=87dede51a10e22722618c1900db75cb764463d91 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:35:12 compute-0 python3.9[190507]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:35:13 compute-0 python3.9[190628]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/firewall.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769553312.2851942-269-165366174444242/.source.yaml _original_basename=firewall.yaml follow=False checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:35:13 compute-0 python3.9[190778]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:35:14 compute-0 python3.9[190899]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769553313.52395-285-86375923890559/.source.yaml _original_basename=node_exporter.yaml follow=False checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:35:15 compute-0 python3.9[191049]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:35:15 compute-0 podman[191144]: 2026-01-27 22:35:15.579481626 +0000 UTC m=+0.100511991 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 22:35:15 compute-0 python3.9[191190]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769553314.7008965-300-169844231466739/.source.yaml _original_basename=podman_exporter.yaml follow=False checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:35:16 compute-0 python3.9[191348]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:35:16 compute-0 python3.9[191469]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769553315.929184-315-279091187665304/.source.yaml _original_basename=ceilometer_prom_exporter.yaml follow=False checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:35:17 compute-0 sudo[191619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxxvfgnifigdiatlortlcwmpafotibum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553317.1293304-330-71581264634814/AnsiballZ_file.py'
Jan 27 22:35:17 compute-0 sudo[191619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:17 compute-0 python3.9[191621]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:35:17 compute-0 sudo[191619]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:18 compute-0 sudo[191771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snyoaiygodjaszqeydwqjrngiryrnvmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553317.8747237-338-30360815736856/AnsiballZ_file.py'
Jan 27 22:35:18 compute-0 sudo[191771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:18 compute-0 python3.9[191773]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:35:18 compute-0 sudo[191771]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:18 compute-0 python3.9[191923]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:35:19 compute-0 python3.9[192075]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:35:20 compute-0 python3.9[192227]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:35:20 compute-0 sudo[192379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvfbzxlpdqigjzsvxkqlonfqzzrwjffj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553320.4454577-370-217465783053831/AnsiballZ_file.py'
Jan 27 22:35:20 compute-0 sudo[192379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:20 compute-0 python3.9[192381]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:35:20 compute-0 sudo[192379]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:21 compute-0 sudo[192531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbjyhdxxiwxroobgpjxaotkvrwyhowst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553321.015791-378-90106738926506/AnsiballZ_systemd_service.py'
Jan 27 22:35:21 compute-0 sudo[192531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:21 compute-0 python3.9[192533]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:35:21 compute-0 systemd[1]: Reloading.
Jan 27 22:35:21 compute-0 systemd-rc-local-generator[192563]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:35:21 compute-0 systemd-sysv-generator[192567]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:35:21 compute-0 systemd[1]: Listening on Podman API Socket.
Jan 27 22:35:22 compute-0 sudo[192531]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:22 compute-0 sudo[192721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edxpfjuovezxxqyvyvdicyalubqsmsai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553322.244762-387-129234755489980/AnsiballZ_stat.py'
Jan 27 22:35:22 compute-0 sudo[192721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:22 compute-0 python3.9[192723]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:35:22 compute-0 sudo[192721]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:22 compute-0 sudo[192844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifhqgdtslnkpxaoizmgvcewralblboty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553322.244762-387-129234755489980/AnsiballZ_copy.py'
Jan 27 22:35:22 compute-0 sudo[192844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:23 compute-0 python3.9[192846]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769553322.244762-387-129234755489980/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:35:23 compute-0 sudo[192844]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:23 compute-0 sudo[192920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnlqprvxirthojiahxbgpdectitktemp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553322.244762-387-129234755489980/AnsiballZ_stat.py'
Jan 27 22:35:23 compute-0 sudo[192920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:23 compute-0 python3.9[192922]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:35:23 compute-0 sudo[192920]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:24 compute-0 sudo[193043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqgxmqwyjplwxxmgwhdnanejaerneecf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553322.244762-387-129234755489980/AnsiballZ_copy.py'
Jan 27 22:35:24 compute-0 sudo[193043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:24 compute-0 python3.9[193045]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769553322.244762-387-129234755489980/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:35:24 compute-0 sudo[193043]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:24 compute-0 sudo[193195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gikzepnjraqoooontqnhltesvkfmijep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553324.7492492-419-198779574617557/AnsiballZ_file.py'
Jan 27 22:35:24 compute-0 sudo[193195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:25 compute-0 python3.9[193197]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:35:25 compute-0 sudo[193195]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:25 compute-0 sudo[193347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsxmurleaynyhtbcigsnnprnshlomwah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553325.3312037-427-63764166493861/AnsiballZ_file.py'
Jan 27 22:35:25 compute-0 sudo[193347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:25 compute-0 python3.9[193349]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:35:25 compute-0 sudo[193347]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:26 compute-0 sudo[193499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovedcxixanghnstezerdlfslktwsrkue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553325.9769413-435-7774407148064/AnsiballZ_stat.py'
Jan 27 22:35:26 compute-0 sudo[193499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:26 compute-0 python3.9[193501]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:35:26 compute-0 sudo[193499]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:26 compute-0 sudo[193622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvkueqauekowsypkmnghqeebkflvlcvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553325.9769413-435-7774407148064/AnsiballZ_copy.py'
Jan 27 22:35:26 compute-0 sudo[193622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:26 compute-0 python3.9[193624]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769553325.9769413-435-7774407148064/.source.json _original_basename=.njmlvyzz follow=False checksum=ce2b0c83293a970bafffa087afa083dd7c93a79c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:35:26 compute-0 sudo[193622]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:27 compute-0 python3.9[193774]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:35:29 compute-0 sudo[194195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yooxsriaefkzsykurevtuxwumwphtijd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553329.007332-475-24461702942552/AnsiballZ_container_config_data.py'
Jan 27 22:35:29 compute-0 sudo[194195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:29 compute-0 python3.9[194197]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute config_pattern=*.json debug=False
Jan 27 22:35:29 compute-0 sudo[194195]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:30 compute-0 sudo[194347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owsukscjgypxiyveuuwgojykmanpmcrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553329.9476562-486-216907433390002/AnsiballZ_container_config_hash.py'
Jan 27 22:35:30 compute-0 sudo[194347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:30 compute-0 python3.9[194349]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 22:35:30 compute-0 sudo[194347]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:31 compute-0 sudo[194499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jesavcmyvsenscsfpfwfljmpvakcntxv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769553330.8192358-496-180861540451046/AnsiballZ_edpm_container_manage.py'
Jan 27 22:35:31 compute-0 sudo[194499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:31 compute-0 python3[194501]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute config_id=ceilometer_agent_compute config_overrides={} config_patterns=*.json containers=['ceilometer_agent_compute'] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 22:35:31 compute-0 podman[194538]: 2026-01-27 22:35:31.734587676 +0000 UTC m=+0.049427729 container create 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_compute)
Jan 27 22:35:31 compute-0 podman[194538]: 2026-01-27 22:35:31.707461496 +0000 UTC m=+0.022301569 image pull 68a60f9093568ce7a1c5b4524fb1e8f03692d56fcec899fd30bbb31f7cc46992 quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Jan 27 22:35:31 compute-0 python3[194501]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595 --healthcheck-command /openstack/healthcheck compute --label config_id=ceilometer_agent_compute --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z --volume /var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
Jan 27 22:35:31 compute-0 sudo[194499]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:32 compute-0 sudo[194726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpdycymepinnmonvmiyoybwfbqfrvlne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553332.0376272-504-239399024725242/AnsiballZ_stat.py'
Jan 27 22:35:32 compute-0 sudo[194726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:32 compute-0 python3.9[194728]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:35:32 compute-0 sudo[194726]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:33 compute-0 sudo[194880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfuipexskxtznnyxxedstlnzeutxnbzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553332.7554617-513-214257364808352/AnsiballZ_file.py'
Jan 27 22:35:33 compute-0 sudo[194880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:33 compute-0 python3.9[194882]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:35:33 compute-0 sudo[194880]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:33 compute-0 sudo[194956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maixnbmrpyicqnjqynkqflvqtqnrtooj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553332.7554617-513-214257364808352/AnsiballZ_stat.py'
Jan 27 22:35:33 compute-0 sudo[194956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:33 compute-0 python3.9[194958]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:35:33 compute-0 sudo[194956]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:34 compute-0 sudo[195107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obtfegvzamtkrtpumamohxaldhyrqbdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553333.7075229-513-112599462696690/AnsiballZ_copy.py'
Jan 27 22:35:34 compute-0 sudo[195107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:34 compute-0 python3.9[195109]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769553333.7075229-513-112599462696690/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:35:34 compute-0 sudo[195107]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:35 compute-0 sudo[195183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccyyjxgghexhhdsyuukrsspjtxnrquru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553333.7075229-513-112599462696690/AnsiballZ_systemd.py'
Jan 27 22:35:35 compute-0 sudo[195183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:35 compute-0 python3.9[195185]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 22:35:35 compute-0 systemd[1]: Reloading.
Jan 27 22:35:35 compute-0 systemd-sysv-generator[195216]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:35:35 compute-0 systemd-rc-local-generator[195213]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:35:35 compute-0 sudo[195183]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:35 compute-0 sudo[195294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkktnustrrrdmcsfbpldzecxguuygxwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553333.7075229-513-112599462696690/AnsiballZ_systemd.py'
Jan 27 22:35:35 compute-0 sudo[195294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:36 compute-0 python3.9[195296]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:35:36 compute-0 systemd[1]: Reloading.
Jan 27 22:35:36 compute-0 systemd-rc-local-generator[195328]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:35:36 compute-0 systemd-sysv-generator[195332]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:35:36 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Jan 27 22:35:36 compute-0 systemd[1]: Started libcrun container.
Jan 27 22:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58395a60d44db59cc2cdc5bcd8552890026827a974549d23576bb92fd253bf9f/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 22:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58395a60d44db59cc2cdc5bcd8552890026827a974549d23576bb92fd253bf9f/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Jan 27 22:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58395a60d44db59cc2cdc5bcd8552890026827a974549d23576bb92fd253bf9f/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Jan 27 22:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58395a60d44db59cc2cdc5bcd8552890026827a974549d23576bb92fd253bf9f/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Jan 27 22:35:36 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617.
Jan 27 22:35:36 compute-0 podman[195338]: 2026-01-27 22:35:36.747321303 +0000 UTC m=+0.130255142 container init 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 22:35:36 compute-0 ceilometer_agent_compute[195354]: + sudo -E kolla_set_configs
Jan 27 22:35:36 compute-0 sudo[195360]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Jan 27 22:35:36 compute-0 ceilometer_agent_compute[195354]: sudo: unable to send audit message: Operation not permitted
Jan 27 22:35:36 compute-0 sudo[195360]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 27 22:35:36 compute-0 podman[195338]: 2026-01-27 22:35:36.784003961 +0000 UTC m=+0.166937780 container start 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Jan 27 22:35:36 compute-0 podman[195338]: ceilometer_agent_compute
Jan 27 22:35:36 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Jan 27 22:35:36 compute-0 sudo[195294]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:36 compute-0 ceilometer_agent_compute[195354]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 27 22:35:36 compute-0 ceilometer_agent_compute[195354]: INFO:__main__:Validating config file
Jan 27 22:35:36 compute-0 ceilometer_agent_compute[195354]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 27 22:35:36 compute-0 ceilometer_agent_compute[195354]: INFO:__main__:Copying service configuration files
Jan 27 22:35:36 compute-0 ceilometer_agent_compute[195354]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Jan 27 22:35:36 compute-0 ceilometer_agent_compute[195354]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Jan 27 22:35:36 compute-0 ceilometer_agent_compute[195354]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Jan 27 22:35:36 compute-0 ceilometer_agent_compute[195354]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Jan 27 22:35:36 compute-0 ceilometer_agent_compute[195354]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Jan 27 22:35:36 compute-0 ceilometer_agent_compute[195354]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Jan 27 22:35:36 compute-0 ceilometer_agent_compute[195354]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 27 22:35:36 compute-0 ceilometer_agent_compute[195354]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 27 22:35:36 compute-0 ceilometer_agent_compute[195354]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 27 22:35:36 compute-0 ceilometer_agent_compute[195354]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 27 22:35:36 compute-0 ceilometer_agent_compute[195354]: INFO:__main__:Writing out command to execute
Jan 27 22:35:36 compute-0 sudo[195360]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:36 compute-0 ceilometer_agent_compute[195354]: ++ cat /run_command
Jan 27 22:35:36 compute-0 ceilometer_agent_compute[195354]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Jan 27 22:35:36 compute-0 ceilometer_agent_compute[195354]: + ARGS=
Jan 27 22:35:36 compute-0 ceilometer_agent_compute[195354]: + sudo kolla_copy_cacerts
Jan 27 22:35:36 compute-0 sudo[195387]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Jan 27 22:35:36 compute-0 ceilometer_agent_compute[195354]: sudo: unable to send audit message: Operation not permitted
Jan 27 22:35:36 compute-0 sudo[195387]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 27 22:35:36 compute-0 sudo[195387]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:36 compute-0 ceilometer_agent_compute[195354]: + [[ ! -n '' ]]
Jan 27 22:35:36 compute-0 ceilometer_agent_compute[195354]: + . kolla_extend_start
Jan 27 22:35:36 compute-0 ceilometer_agent_compute[195354]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Jan 27 22:35:36 compute-0 ceilometer_agent_compute[195354]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Jan 27 22:35:36 compute-0 ceilometer_agent_compute[195354]: + umask 0022
Jan 27 22:35:36 compute-0 ceilometer_agent_compute[195354]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Jan 27 22:35:36 compute-0 podman[195361]: 2026-01-27 22:35:36.881502464 +0000 UTC m=+0.081464554 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Jan 27 22:35:36 compute-0 systemd[1]: 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617-3a283224e5730c2.service: Main process exited, code=exited, status=1/FAILURE
Jan 27 22:35:36 compute-0 systemd[1]: 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617-3a283224e5730c2.service: Failed with result 'exit-code'.
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.645 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.645 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.646 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.646 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.646 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.646 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.646 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.646 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.646 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.646 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.646 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.646 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.647 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.647 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.647 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.647 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.647 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.647 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.647 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.647 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.647 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.648 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.648 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.648 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.648 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.648 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.648 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.648 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.648 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.648 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.648 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.648 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.648 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.649 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.649 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.649 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.649 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.649 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.649 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.649 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.649 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.649 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.649 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.649 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.649 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.649 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.650 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.650 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.650 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.650 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.650 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.650 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.650 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.650 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.650 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.650 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.650 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.650 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.651 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.651 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.651 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.651 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.651 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.651 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.651 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.651 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.651 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.651 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.651 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.651 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.651 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.652 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.652 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.652 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.652 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.652 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.652 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.652 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.652 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.652 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.652 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.652 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.653 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.653 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.653 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.653 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.653 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.653 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.653 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.653 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.653 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.653 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.654 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.654 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.654 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.654 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.654 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.654 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.654 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.654 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.654 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.654 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.654 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.654 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.654 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.655 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.655 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.655 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.655 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.655 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.655 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.655 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.655 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.655 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.655 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.655 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.655 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.656 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.656 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.656 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.656 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.656 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.656 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.656 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.656 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.656 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.656 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.656 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.656 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.657 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.657 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.657 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.657 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.657 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.657 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.657 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.657 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.657 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.657 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.657 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.657 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.658 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.658 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.658 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.658 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.658 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.658 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.658 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.658 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.658 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.658 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.658 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.658 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.681 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.682 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.683 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.683 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.683 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.683 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.683 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.683 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.683 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.684 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.684 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.684 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.684 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.684 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.684 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.684 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.684 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.685 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.685 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.685 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.685 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.685 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.685 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.685 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.685 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.686 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.686 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.686 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.686 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.686 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.686 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.686 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.686 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.686 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.687 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.687 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.687 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.687 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.687 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.687 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.687 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.687 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.687 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.688 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.688 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.688 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.688 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.688 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.688 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.688 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.688 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.688 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.689 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.689 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.689 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.689 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.689 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.689 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.689 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.689 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.690 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.690 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.690 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.690 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.690 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.690 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.690 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.690 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.690 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.691 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.691 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.691 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.691 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.691 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.691 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.691 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.691 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.692 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.692 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.692 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.692 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.692 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.692 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.692 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.692 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.693 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.693 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.693 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.693 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.693 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.693 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.693 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.693 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.693 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.694 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.694 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.694 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.694 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.694 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.694 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.694 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.694 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.694 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.695 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.695 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.695 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.695 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.695 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.695 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.695 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.695 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.696 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.696 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.696 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.696 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.696 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.696 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.696 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.696 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.696 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.697 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.697 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.697 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.697 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.697 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.697 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.697 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.697 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.698 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.698 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.698 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.698 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.698 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.698 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.698 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.698 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.698 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.699 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.699 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.699 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.699 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.699 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.699 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.699 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.699 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.700 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.700 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.700 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.700 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.700 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.700 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.700 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.700 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.700 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.703 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.705 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.706 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Jan 27 22:35:37 compute-0 python3.9[195535]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.926 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.934 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.934 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Jan 27 22:35:37 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:37.934 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.057 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.057 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.057 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.057 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.057 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.057 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.057 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.058 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.058 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.058 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.058 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.058 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.058 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.058 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.058 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.059 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.059 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.059 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.059 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.059 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.059 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.059 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.059 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.059 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.059 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.060 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.060 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.060 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.060 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.060 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.060 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.060 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.060 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.060 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.060 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.060 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.060 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.060 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.060 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.061 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.061 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.061 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.061 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.061 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.061 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.061 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.061 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.061 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.061 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.061 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.061 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.062 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.062 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.062 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.062 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.062 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.062 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.062 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.062 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.062 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.062 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.062 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.063 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.063 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.063 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.063 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.063 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.063 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.063 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.063 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.063 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.063 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.063 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.063 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.064 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.064 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.064 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.064 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.064 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.064 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.064 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.064 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.064 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.064 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.064 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.064 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.065 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.065 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.065 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.065 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.065 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.065 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.065 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.065 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.065 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.065 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.065 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.065 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.065 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.065 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.066 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.066 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.066 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.066 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.066 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.066 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.066 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.066 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.066 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.066 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.066 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.067 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.067 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.067 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.067 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.067 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.067 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.067 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.067 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.067 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.067 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.067 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.067 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.067 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.067 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.067 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.067 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.068 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.068 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.068 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.068 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.068 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.068 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.068 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.068 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.068 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.068 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.068 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.068 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.068 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.068 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.068 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.068 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.068 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.069 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.069 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.069 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.069 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.069 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.069 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.069 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.069 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.069 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.069 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.069 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.069 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.069 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.070 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.070 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.070 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.070 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.070 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.070 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.070 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.070 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.070 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.070 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.070 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.070 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.075 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.099 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.099 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c646060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f826c6475f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.101 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.102 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.102 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.105 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f826c645dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.107 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f826c647800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.108 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.108 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.108 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645490>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f826c647650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.109 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.109 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.110 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f826c645640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.111 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.112 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f826c8ae7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.112 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.113 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f826c645a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.113 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.114 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f826c6462a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.114 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f826c647f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.116 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.116 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6477a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.116 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f826c645af0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.117 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.117 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f826c645d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.117 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.117 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f826c645b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.117 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.117 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f826c644a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.117 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.117 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f826c6453a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.117 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.117 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f826c6454c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.117 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.117 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f826c645520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.118 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f826c645d90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.118 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f826c646570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.118 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f826c645580>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.118 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f826c6455e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.118 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f826c644050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.118 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f826c647860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.118 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f826c6476e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f826c6456a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f826f277b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f826c647770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.119 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.120 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.120 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.120 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.120 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.120 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.120 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.121 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.121 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.121 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.121 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.121 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.121 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.121 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.122 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.122 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.122 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.122 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.122 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:35:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:35:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:35:38 compute-0 sudo[195698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jndcssvucdvewlaiiozjaeehpsqggxpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553338.1905658-558-113587531177466/AnsiballZ_stat.py'
Jan 27 22:35:38 compute-0 sudo[195698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:38 compute-0 python3.9[195700]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:35:38 compute-0 sudo[195698]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:38 compute-0 podman[195701]: 2026-01-27 22:35:38.773639508 +0000 UTC m=+0.073680506 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 22:35:39 compute-0 sudo[195842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujonzfawnowpyxaiwqlhbfcjyqkdrhga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553338.1905658-558-113587531177466/AnsiballZ_copy.py'
Jan 27 22:35:39 compute-0 sudo[195842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:39 compute-0 python3.9[195844]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769553338.1905658-558-113587531177466/.source.yaml _original_basename=.xnglt3kn follow=False checksum=3e088fd8902e0bcd11bd87fba3f2566b527c36d3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:35:39 compute-0 sudo[195842]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:39 compute-0 sudo[195994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opgsebnnskgjidibxgtxgpvcyyugknlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553339.622338-573-183027125277376/AnsiballZ_stat.py'
Jan 27 22:35:39 compute-0 sudo[195994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:39 compute-0 nova_compute[185650]: 2026-01-27 22:35:39.995 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:35:39 compute-0 nova_compute[185650]: 2026-01-27 22:35:39.996 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:35:39 compute-0 nova_compute[185650]: 2026-01-27 22:35:39.996 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 22:35:39 compute-0 nova_compute[185650]: 2026-01-27 22:35:39.996 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 22:35:40 compute-0 nova_compute[185650]: 2026-01-27 22:35:40.114 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 22:35:40 compute-0 nova_compute[185650]: 2026-01-27 22:35:40.115 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:35:40 compute-0 nova_compute[185650]: 2026-01-27 22:35:40.116 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:35:40 compute-0 nova_compute[185650]: 2026-01-27 22:35:40.116 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:35:40 compute-0 nova_compute[185650]: 2026-01-27 22:35:40.117 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:35:40 compute-0 nova_compute[185650]: 2026-01-27 22:35:40.117 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:35:40 compute-0 nova_compute[185650]: 2026-01-27 22:35:40.118 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:35:40 compute-0 nova_compute[185650]: 2026-01-27 22:35:40.118 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 22:35:40 compute-0 nova_compute[185650]: 2026-01-27 22:35:40.119 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:35:40 compute-0 python3.9[195996]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:35:40 compute-0 sudo[195994]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:40 compute-0 nova_compute[185650]: 2026-01-27 22:35:40.163 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:35:40 compute-0 nova_compute[185650]: 2026-01-27 22:35:40.163 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:35:40 compute-0 nova_compute[185650]: 2026-01-27 22:35:40.164 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:35:40 compute-0 nova_compute[185650]: 2026-01-27 22:35:40.164 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 22:35:40 compute-0 nova_compute[185650]: 2026-01-27 22:35:40.331 185654 WARNING nova.virt.libvirt.driver [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 22:35:40 compute-0 nova_compute[185650]: 2026-01-27 22:35:40.333 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5902MB free_disk=72.64413833618164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 22:35:40 compute-0 nova_compute[185650]: 2026-01-27 22:35:40.333 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:35:40 compute-0 nova_compute[185650]: 2026-01-27 22:35:40.333 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:35:40 compute-0 sudo[196117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syfxkofnbgoacmxzjhoykctbouyfivta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553339.622338-573-183027125277376/AnsiballZ_copy.py'
Jan 27 22:35:40 compute-0 sudo[196117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:40 compute-0 nova_compute[185650]: 2026-01-27 22:35:40.530 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 22:35:40 compute-0 nova_compute[185650]: 2026-01-27 22:35:40.530 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 22:35:40 compute-0 nova_compute[185650]: 2026-01-27 22:35:40.553 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 22:35:40 compute-0 nova_compute[185650]: 2026-01-27 22:35:40.616 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 22:35:40 compute-0 nova_compute[185650]: 2026-01-27 22:35:40.620 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 22:35:40 compute-0 nova_compute[185650]: 2026-01-27 22:35:40.621 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.288s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:35:40 compute-0 python3.9[196119]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769553339.622338-573-183027125277376/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:35:40 compute-0 sudo[196117]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:41 compute-0 sudo[196269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqtzjigvtleguzupwdltmvjuydsdppqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553341.226302-594-145338649632235/AnsiballZ_file.py'
Jan 27 22:35:41 compute-0 sudo[196269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:41 compute-0 python3.9[196271]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:35:41 compute-0 sudo[196269]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:42 compute-0 sudo[196421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsbtycmtdemeuhjtklythliujefwotii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553341.8972638-602-103128151921203/AnsiballZ_file.py'
Jan 27 22:35:42 compute-0 sudo[196421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:42 compute-0 python3.9[196423]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:35:42 compute-0 sudo[196421]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:42 compute-0 sudo[196573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjeyuetzfhlezujjygdudauukdltulmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553342.6433198-610-123974615980519/AnsiballZ_stat.py'
Jan 27 22:35:42 compute-0 sudo[196573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:43 compute-0 python3.9[196575]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:35:43 compute-0 sudo[196573]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:43 compute-0 sudo[196651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkyfcnyudptcgcmeckobflgheqwauqvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553342.6433198-610-123974615980519/AnsiballZ_file.py'
Jan 27 22:35:43 compute-0 sudo[196651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:43 compute-0 python3.9[196653]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.nuuwywq8 recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:35:43 compute-0 sudo[196651]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:44 compute-0 python3.9[196803]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/node_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:35:46 compute-0 sudo[197234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcuykggjoasnlkiomrqarvuffuxjddng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553345.935593-647-227181288305746/AnsiballZ_container_config_data.py'
Jan 27 22:35:46 compute-0 sudo[197234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:46 compute-0 podman[197198]: 2026-01-27 22:35:46.321379691 +0000 UTC m=+0.133745959 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 22:35:46 compute-0 python3.9[197243]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/node_exporter config_pattern=*.json debug=False
Jan 27 22:35:46 compute-0 sudo[197234]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:47 compute-0 sudo[197402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeojhpnxdshdivzqeeldwryddhxshmkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553347.0071418-658-207457216845443/AnsiballZ_container_config_hash.py'
Jan 27 22:35:47 compute-0 sudo[197402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:47 compute-0 python3.9[197404]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 22:35:47 compute-0 sudo[197402]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:48 compute-0 sudo[197554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfuvebfirhoxhdhakssmatgrrhnhdhti ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769553347.9012647-668-96292645068701/AnsiballZ_edpm_container_manage.py'
Jan 27 22:35:48 compute-0 sudo[197554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:48 compute-0 python3[197556]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/node_exporter config_id=node_exporter config_overrides={} config_patterns=*.json containers=['node_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 22:35:48 compute-0 podman[197594]: 2026-01-27 22:35:48.749775926 +0000 UTC m=+0.054437116 container create f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=node_exporter, container_name=node_exporter, managed_by=edpm_ansible)
Jan 27 22:35:48 compute-0 podman[197594]: 2026-01-27 22:35:48.720978739 +0000 UTC m=+0.025640019 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Jan 27 22:35:48 compute-0 python3[197556]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595 --healthcheck-command /openstack/healthcheck node_exporter --label config_id=node_exporter --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
Jan 27 22:35:48 compute-0 sudo[197554]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:49 compute-0 sudo[197782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izyzmmfkhxedmdcekqdppypvcgrggqcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553349.153703-676-196970291629937/AnsiballZ_stat.py'
Jan 27 22:35:49 compute-0 sudo[197782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:49 compute-0 python3.9[197784]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:35:49 compute-0 sudo[197782]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:50 compute-0 sudo[197936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsffvrwuctxtgjxnbxtifqvylleftpdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553350.0064158-685-167940406062940/AnsiballZ_file.py'
Jan 27 22:35:50 compute-0 sudo[197936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:50 compute-0 python3.9[197938]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:35:50 compute-0 sudo[197936]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:50 compute-0 sudo[198012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kovyhlizglqooarwesyosedliudfpdnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553350.0064158-685-167940406062940/AnsiballZ_stat.py'
Jan 27 22:35:50 compute-0 sudo[198012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:50 compute-0 python3.9[198014]: ansible-stat Invoked with path=/etc/systemd/system/edpm_node_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:35:50 compute-0 sudo[198012]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:51 compute-0 sudo[198163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irprwurbqwhjvcknqtujjdtocgdihpby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553351.0654418-685-71608843816807/AnsiballZ_copy.py'
Jan 27 22:35:51 compute-0 sudo[198163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:51 compute-0 python3.9[198165]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769553351.0654418-685-71608843816807/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:35:51 compute-0 sudo[198163]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:51 compute-0 sudo[198239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsxzhspvpwklfyybytfoliyrwtfpcllk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553351.0654418-685-71608843816807/AnsiballZ_systemd.py'
Jan 27 22:35:51 compute-0 sudo[198239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:52 compute-0 python3.9[198241]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 22:35:52 compute-0 systemd[1]: Reloading.
Jan 27 22:35:52 compute-0 systemd-sysv-generator[198272]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:35:52 compute-0 systemd-rc-local-generator[198267]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:35:52 compute-0 sudo[198239]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:52 compute-0 sudo[198351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilurytawaspbuvddohumlzjieojqtkll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553351.0654418-685-71608843816807/AnsiballZ_systemd.py'
Jan 27 22:35:52 compute-0 sudo[198351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:53 compute-0 python3.9[198353]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:35:53 compute-0 systemd[1]: Reloading.
Jan 27 22:35:53 compute-0 systemd-rc-local-generator[198383]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:35:53 compute-0 systemd-sysv-generator[198387]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:35:53 compute-0 systemd[1]: Starting node_exporter container...
Jan 27 22:35:53 compute-0 systemd[1]: Started libcrun container.
Jan 27 22:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3585ec29c11d6d3d4510eb83c9a25f80009be687c9018492bf9f6d3d880d2915/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 22:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3585ec29c11d6d3d4510eb83c9a25f80009be687c9018492bf9f6d3d880d2915/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Jan 27 22:35:53 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de.
Jan 27 22:35:53 compute-0 podman[198393]: 2026-01-27 22:35:53.649311204 +0000 UTC m=+0.144063218 container init f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.669Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.669Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.669Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.670Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.670Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.670Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.670Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.670Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.670Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.671Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.671Z caller=node_exporter.go:117 level=info collector=arp
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.671Z caller=node_exporter.go:117 level=info collector=bcache
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.671Z caller=node_exporter.go:117 level=info collector=bonding
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.671Z caller=node_exporter.go:117 level=info collector=btrfs
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.671Z caller=node_exporter.go:117 level=info collector=conntrack
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.671Z caller=node_exporter.go:117 level=info collector=cpu
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.671Z caller=node_exporter.go:117 level=info collector=cpufreq
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.671Z caller=node_exporter.go:117 level=info collector=diskstats
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.671Z caller=node_exporter.go:117 level=info collector=edac
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.671Z caller=node_exporter.go:117 level=info collector=fibrechannel
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.671Z caller=node_exporter.go:117 level=info collector=filefd
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.671Z caller=node_exporter.go:117 level=info collector=filesystem
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.671Z caller=node_exporter.go:117 level=info collector=infiniband
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.671Z caller=node_exporter.go:117 level=info collector=ipvs
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.671Z caller=node_exporter.go:117 level=info collector=loadavg
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.671Z caller=node_exporter.go:117 level=info collector=mdadm
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.671Z caller=node_exporter.go:117 level=info collector=meminfo
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.671Z caller=node_exporter.go:117 level=info collector=netclass
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.671Z caller=node_exporter.go:117 level=info collector=netdev
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.671Z caller=node_exporter.go:117 level=info collector=netstat
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.671Z caller=node_exporter.go:117 level=info collector=nfs
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.671Z caller=node_exporter.go:117 level=info collector=nfsd
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.671Z caller=node_exporter.go:117 level=info collector=nvme
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.671Z caller=node_exporter.go:117 level=info collector=schedstat
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.671Z caller=node_exporter.go:117 level=info collector=sockstat
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.671Z caller=node_exporter.go:117 level=info collector=softnet
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.671Z caller=node_exporter.go:117 level=info collector=systemd
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.671Z caller=node_exporter.go:117 level=info collector=tapestats
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.671Z caller=node_exporter.go:117 level=info collector=udp_queues
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.671Z caller=node_exporter.go:117 level=info collector=vmstat
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.671Z caller=node_exporter.go:117 level=info collector=xfs
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.671Z caller=node_exporter.go:117 level=info collector=zfs
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.672Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Jan 27 22:35:53 compute-0 podman[198393]: 2026-01-27 22:35:53.672807333 +0000 UTC m=+0.167559307 container start f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 22:35:53 compute-0 node_exporter[198408]: ts=2026-01-27T22:35:53.673Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Jan 27 22:35:53 compute-0 podman[198393]: node_exporter
Jan 27 22:35:53 compute-0 systemd[1]: Started node_exporter container.
Jan 27 22:35:53 compute-0 sudo[198351]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:53 compute-0 podman[198417]: 2026-01-27 22:35:53.742906208 +0000 UTC m=+0.059044936 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 22:35:54 compute-0 python3.9[198590]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 27 22:35:55 compute-0 sudo[198740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elcgsezcpdrjqmetiwhdxusjoiytjvik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553354.9874473-730-155205072568070/AnsiballZ_stat.py'
Jan 27 22:35:55 compute-0 sudo[198740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:55 compute-0 python3.9[198742]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:35:55 compute-0 sudo[198740]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:56 compute-0 sudo[198865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmvgoxokshclkllnhcqkblfumanthurg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553354.9874473-730-155205072568070/AnsiballZ_copy.py'
Jan 27 22:35:56 compute-0 sudo[198865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:56 compute-0 python3.9[198867]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769553354.9874473-730-155205072568070/.source.yaml _original_basename=._v77tqz9 follow=False checksum=bf159b4004a2b067e28a7f160121477fe6910fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:35:56 compute-0 sudo[198865]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:56 compute-0 sudo[199017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpcdzuugndzbpbakdtuxkmeizpfzigoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553356.4271324-745-67301449535908/AnsiballZ_stat.py'
Jan 27 22:35:56 compute-0 sudo[199017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:56 compute-0 python3.9[199019]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:35:56 compute-0 sudo[199017]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:57 compute-0 sudo[199140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kevtghvaeihiitfmqoydwqgwgceqeslk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553356.4271324-745-67301449535908/AnsiballZ_copy.py'
Jan 27 22:35:57 compute-0 sudo[199140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:57 compute-0 python3.9[199142]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769553356.4271324-745-67301449535908/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:35:57 compute-0 sudo[199140]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:58 compute-0 sudo[199292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmzodzkyrnksegpvdbuuddlmeptiguov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553357.9336438-766-185282937463261/AnsiballZ_file.py'
Jan 27 22:35:58 compute-0 sudo[199292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:58 compute-0 python3.9[199294]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:35:58 compute-0 sudo[199292]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:58 compute-0 sudo[199444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvezukxbdquqvspxgsocipjlteemttzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553358.6524973-774-271544529814388/AnsiballZ_file.py'
Jan 27 22:35:58 compute-0 sudo[199444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:59 compute-0 python3.9[199446]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:35:59 compute-0 sudo[199444]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:59 compute-0 sudo[199596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmncaoprctcvymuygkbsspwlhqckmiid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553359.2852058-782-28783857033431/AnsiballZ_stat.py'
Jan 27 22:35:59 compute-0 sudo[199596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:35:59 compute-0 python3.9[199598]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:35:59 compute-0 sudo[199596]: pam_unix(sudo:session): session closed for user root
Jan 27 22:35:59 compute-0 sudo[199674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzivezhztqgbamqlxgdufahjsbmuadlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553359.2852058-782-28783857033431/AnsiballZ_file.py'
Jan 27 22:35:59 compute-0 sudo[199674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:00 compute-0 python3.9[199676]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.ujwp2dxp recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:36:00 compute-0 sudo[199674]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:00 compute-0 python3.9[199826]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/podman_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:36:02 compute-0 sudo[200247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkswphzuhskohocsqfnpzhltdvxnhaox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553362.2347906-819-265410493294634/AnsiballZ_container_config_data.py'
Jan 27 22:36:02 compute-0 sudo[200247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:02 compute-0 python3.9[200249]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/podman_exporter config_pattern=*.json debug=False
Jan 27 22:36:02 compute-0 sudo[200247]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:03 compute-0 sudo[200399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsqyfvyozninkqrputxpksixbcvmisct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553363.0532033-830-85541874476025/AnsiballZ_container_config_hash.py'
Jan 27 22:36:03 compute-0 sudo[200399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:03 compute-0 python3.9[200401]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 22:36:03 compute-0 sudo[200399]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:04 compute-0 sudo[200551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqfikwjozmztdntojpjwyuhgnsnbkgta ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769553363.7723954-840-269731769800144/AnsiballZ_edpm_container_manage.py'
Jan 27 22:36:04 compute-0 sudo[200551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:36:04.121 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:36:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:36:04.122 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:36:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:36:04.122 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:36:04 compute-0 python3[200553]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/podman_exporter config_id=podman_exporter config_overrides={} config_patterns=*.json containers=['podman_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 22:36:05 compute-0 podman[200567]: 2026-01-27 22:36:05.413016515 +0000 UTC m=+1.032000066 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Jan 27 22:36:05 compute-0 podman[200664]: 2026-01-27 22:36:05.54451268 +0000 UTC m=+0.056346220 container create 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=podman_exporter)
Jan 27 22:36:05 compute-0 podman[200664]: 2026-01-27 22:36:05.509294524 +0000 UTC m=+0.021128164 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Jan 27 22:36:05 compute-0 python3[200553]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env CONTAINER_HOST=unix:///run/podman/podman.sock --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595 --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=podman_exporter --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z 
quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Jan 27 22:36:05 compute-0 sudo[200551]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:06 compute-0 sudo[200852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvmljfpkrarfhmyrntwuysqflagkcqfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553365.8167276-848-205071559165694/AnsiballZ_stat.py'
Jan 27 22:36:06 compute-0 sudo[200852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:06 compute-0 python3.9[200854]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:36:06 compute-0 sudo[200852]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:06 compute-0 sudo[201006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szivyamwbscowndwkonoueunvjnbfmyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553366.5559633-857-87353106423976/AnsiballZ_file.py'
Jan 27 22:36:06 compute-0 sudo[201006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:06 compute-0 python3.9[201008]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:36:06 compute-0 sudo[201006]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:07 compute-0 sudo[201092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldsnhoiwsmibafcruqnyapbiyogyagrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553366.5559633-857-87353106423976/AnsiballZ_stat.py'
Jan 27 22:36:07 compute-0 sudo[201092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:07 compute-0 podman[201056]: 2026-01-27 22:36:07.252590486 +0000 UTC m=+0.060573368 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126)
Jan 27 22:36:07 compute-0 systemd[1]: 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617-3a283224e5730c2.service: Main process exited, code=exited, status=1/FAILURE
Jan 27 22:36:07 compute-0 systemd[1]: 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617-3a283224e5730c2.service: Failed with result 'exit-code'.
Jan 27 22:36:07 compute-0 python3.9[201101]: ansible-stat Invoked with path=/etc/systemd/system/edpm_podman_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:36:07 compute-0 sudo[201092]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:07 compute-0 sudo[201253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvbuxwluswwqhgnfckqfnhkrbqminyih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553367.5171044-857-42992197346627/AnsiballZ_copy.py'
Jan 27 22:36:07 compute-0 sudo[201253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:08 compute-0 python3.9[201255]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769553367.5171044-857-42992197346627/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:36:08 compute-0 sudo[201253]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:08 compute-0 sudo[201329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzlhshpzkadlinqvfbrhzacozfggzpai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553367.5171044-857-42992197346627/AnsiballZ_systemd.py'
Jan 27 22:36:08 compute-0 sudo[201329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:08 compute-0 python3.9[201331]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 22:36:08 compute-0 systemd[1]: Reloading.
Jan 27 22:36:08 compute-0 systemd-rc-local-generator[201357]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:36:08 compute-0 systemd-sysv-generator[201361]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:36:08 compute-0 sudo[201329]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:09 compute-0 podman[201366]: 2026-01-27 22:36:09.044864562 +0000 UTC m=+0.071876736 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 22:36:09 compute-0 sudo[201459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtywqaoagbzbmwaiikspnmongvtwoath ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553367.5171044-857-42992197346627/AnsiballZ_systemd.py'
Jan 27 22:36:09 compute-0 sudo[201459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:09 compute-0 python3.9[201461]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:36:09 compute-0 systemd[1]: Reloading.
Jan 27 22:36:09 compute-0 systemd-rc-local-generator[201494]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:36:09 compute-0 systemd-sysv-generator[201497]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:36:09 compute-0 systemd[1]: Starting podman_exporter container...
Jan 27 22:36:10 compute-0 systemd[1]: Started libcrun container.
Jan 27 22:36:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e80683895338f42ff1a2e4e30ea20c8f9e7b544f9b278763408ea7df696d890/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 22:36:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e80683895338f42ff1a2e4e30ea20c8f9e7b544f9b278763408ea7df696d890/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Jan 27 22:36:10 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b.
Jan 27 22:36:10 compute-0 podman[201502]: 2026-01-27 22:36:10.09658733 +0000 UTC m=+0.140169420 container init 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 22:36:10 compute-0 podman_exporter[201518]: ts=2026-01-27T22:36:10.112Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Jan 27 22:36:10 compute-0 podman_exporter[201518]: ts=2026-01-27T22:36:10.113Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Jan 27 22:36:10 compute-0 podman_exporter[201518]: ts=2026-01-27T22:36:10.113Z caller=handler.go:94 level=info msg="enabled collectors"
Jan 27 22:36:10 compute-0 podman_exporter[201518]: ts=2026-01-27T22:36:10.113Z caller=handler.go:105 level=info collector=container
Jan 27 22:36:10 compute-0 podman[201502]: 2026-01-27 22:36:10.123045512 +0000 UTC m=+0.166627582 container start 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 22:36:10 compute-0 podman[201502]: podman_exporter
Jan 27 22:36:10 compute-0 systemd[1]: Starting Podman API Service...
Jan 27 22:36:10 compute-0 systemd[1]: Started Podman API Service.
Jan 27 22:36:10 compute-0 systemd[1]: Started podman_exporter container.
Jan 27 22:36:10 compute-0 podman[201529]: time="2026-01-27T22:36:10Z" level=info msg="/usr/bin/podman filtering at log level info"
Jan 27 22:36:10 compute-0 podman[201529]: time="2026-01-27T22:36:10Z" level=info msg="Setting parallel job count to 25"
Jan 27 22:36:10 compute-0 podman[201529]: time="2026-01-27T22:36:10Z" level=info msg="Using sqlite as database backend"
Jan 27 22:36:10 compute-0 podman[201529]: time="2026-01-27T22:36:10Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Jan 27 22:36:10 compute-0 podman[201529]: time="2026-01-27T22:36:10Z" level=info msg="Using systemd socket activation to determine API endpoint"
Jan 27 22:36:10 compute-0 podman[201529]: time="2026-01-27T22:36:10Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Jan 27 22:36:10 compute-0 sudo[201459]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:10 compute-0 podman[201529]: @ - - [27/Jan/2026:22:36:10 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Jan 27 22:36:10 compute-0 podman[201529]: time="2026-01-27T22:36:10Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:36:10 compute-0 podman[201528]: 2026-01-27 22:36:10.185303787 +0000 UTC m=+0.053677705 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 22:36:10 compute-0 systemd[1]: 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b-5646d525bfc070ea.service: Main process exited, code=exited, status=1/FAILURE
Jan 27 22:36:10 compute-0 systemd[1]: 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b-5646d525bfc070ea.service: Failed with result 'exit-code'.
Jan 27 22:36:10 compute-0 podman[201529]: @ - - [27/Jan/2026:22:36:10 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 18095 "" "Go-http-client/1.1"
Jan 27 22:36:10 compute-0 podman_exporter[201518]: ts=2026-01-27T22:36:10.196Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Jan 27 22:36:10 compute-0 podman_exporter[201518]: ts=2026-01-27T22:36:10.196Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Jan 27 22:36:10 compute-0 podman_exporter[201518]: ts=2026-01-27T22:36:10.197Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Jan 27 22:36:10 compute-0 python3.9[201718]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 27 22:36:11 compute-0 sudo[201868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcnmxvmrapenfmvjaaxmqttdfuvgclqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553371.2505884-902-67116664161977/AnsiballZ_stat.py'
Jan 27 22:36:11 compute-0 sudo[201868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:11 compute-0 python3.9[201870]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:36:11 compute-0 sudo[201868]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:12 compute-0 rsyslogd[1003]: imjournal from <np0005598180:python3.9>: begin to drop messages due to rate-limiting
Jan 27 22:36:12 compute-0 sudo[201993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efncxrbxorckgykyivcstkhxykutcalo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553371.2505884-902-67116664161977/AnsiballZ_copy.py'
Jan 27 22:36:12 compute-0 sudo[201993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:12 compute-0 python3.9[201995]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769553371.2505884-902-67116664161977/.source.yaml _original_basename=.6xwn_kjj follow=False checksum=9fe9b4bb90759d22a6b478dc45d94962badfec85 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:36:12 compute-0 sudo[201993]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:12 compute-0 auditd[700]: Audit daemon rotating log files
Jan 27 22:36:12 compute-0 sudo[202145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tykwgndwqwrephjxljjzgnsogkyrnbzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553372.411451-917-135365572922418/AnsiballZ_stat.py'
Jan 27 22:36:12 compute-0 sudo[202145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:12 compute-0 python3.9[202147]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:36:12 compute-0 sudo[202145]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:13 compute-0 sudo[202268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnuauyxrivrhskuojibsntbxnwbzperc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553372.411451-917-135365572922418/AnsiballZ_copy.py'
Jan 27 22:36:13 compute-0 sudo[202268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:13 compute-0 python3.9[202270]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769553372.411451-917-135365572922418/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:36:13 compute-0 sudo[202268]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:14 compute-0 sudo[202420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjtnmsmmnslcudasaqxsfjrfcaxctncy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553373.981633-938-96423647884673/AnsiballZ_file.py'
Jan 27 22:36:14 compute-0 sudo[202420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:14 compute-0 python3.9[202422]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:36:14 compute-0 sudo[202420]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:14 compute-0 sudo[202572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aymfajobeeeoqlajeqwdsuyiyflxnonc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553374.6929266-946-148664532565080/AnsiballZ_file.py'
Jan 27 22:36:14 compute-0 sudo[202572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:15 compute-0 python3.9[202574]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:36:15 compute-0 sudo[202572]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:15 compute-0 sudo[202724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmlcdlpzadavmbglxolgiyffeukypvjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553375.3463428-954-62747528415172/AnsiballZ_stat.py'
Jan 27 22:36:15 compute-0 sudo[202724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:15 compute-0 python3.9[202726]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:36:15 compute-0 sudo[202724]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:16 compute-0 sudo[202802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwahqtoueeqzgimlaeqvcgzqjdboxytf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553375.3463428-954-62747528415172/AnsiballZ_file.py'
Jan 27 22:36:16 compute-0 sudo[202802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:16 compute-0 python3.9[202804]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.zj1ds_h0 recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:36:16 compute-0 sudo[202802]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:16 compute-0 podman[202928]: 2026-01-27 22:36:16.825549555 +0000 UTC m=+0.140372655 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 22:36:16 compute-0 python3.9[202969]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/openstack_network_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:36:18 compute-0 sudo[203401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwtigogvuekyfrnzoewrtnxulpdviken ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553378.254878-991-231000195458419/AnsiballZ_container_config_data.py'
Jan 27 22:36:18 compute-0 sudo[203401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:18 compute-0 python3.9[203403]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/openstack_network_exporter config_pattern=*.json debug=False
Jan 27 22:36:18 compute-0 sudo[203401]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:19 compute-0 sudo[203553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzxptijmqsajzhwtyqachbraeairghfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553379.1159554-1002-249320030779320/AnsiballZ_container_config_hash.py'
Jan 27 22:36:19 compute-0 sudo[203553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:19 compute-0 python3.9[203555]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 22:36:19 compute-0 sudo[203553]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:20 compute-0 sudo[203705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlnbdvygibznqfisqdzznhhqpghlvmhr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769553379.9335911-1012-118225807509738/AnsiballZ_edpm_container_manage.py'
Jan 27 22:36:20 compute-0 sudo[203705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:20 compute-0 python3[203707]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/openstack_network_exporter config_id=openstack_network_exporter config_overrides={} config_patterns=*.json containers=['openstack_network_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 22:36:22 compute-0 podman[203720]: 2026-01-27 22:36:22.742434448 +0000 UTC m=+2.193381448 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Jan 27 22:36:22 compute-0 podman[203814]: 2026-01-27 22:36:22.858418749 +0000 UTC m=+0.045431884 container create b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, architecture=x86_64, container_name=openstack_network_exporter, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 27 22:36:22 compute-0 podman[203814]: 2026-01-27 22:36:22.834264282 +0000 UTC m=+0.021277397 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Jan 27 22:36:22 compute-0 python3[203707]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595 --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=openstack_network_exporter --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Jan 27 22:36:22 compute-0 sudo[203705]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:23 compute-0 sudo[204002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lunofnyfanapaamqzqfwdweyeintrqtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553383.1584177-1020-205443018514358/AnsiballZ_stat.py'
Jan 27 22:36:23 compute-0 sudo[204002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:23 compute-0 python3.9[204004]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:36:23 compute-0 sudo[204002]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:24 compute-0 sudo[204166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgswxxjrfjptzmdhsvrhuxomshsqagvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553383.8595817-1029-2153152424902/AnsiballZ_file.py'
Jan 27 22:36:24 compute-0 podman[204130]: 2026-01-27 22:36:24.176498682 +0000 UTC m=+0.066518004 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 22:36:24 compute-0 sudo[204166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:24 compute-0 python3.9[204179]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:36:24 compute-0 sudo[204166]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:24 compute-0 sudo[204253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzkxpyddftpgumrgrmdibaevsqhjmbmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553383.8595817-1029-2153152424902/AnsiballZ_stat.py'
Jan 27 22:36:24 compute-0 sudo[204253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:24 compute-0 python3.9[204255]: ansible-stat Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:36:24 compute-0 sudo[204253]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:25 compute-0 sudo[204404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjquqntlwfeezrpsyxzbhhcifabckrsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553384.8350546-1029-256928218164736/AnsiballZ_copy.py'
Jan 27 22:36:25 compute-0 sudo[204404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:25 compute-0 python3.9[204406]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769553384.8350546-1029-256928218164736/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:36:25 compute-0 sudo[204404]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:25 compute-0 sudo[204480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfzbjncptbyimwfdxluppbaqcwfgdjar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553384.8350546-1029-256928218164736/AnsiballZ_systemd.py'
Jan 27 22:36:25 compute-0 sudo[204480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:26 compute-0 python3.9[204482]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 22:36:26 compute-0 systemd[1]: Reloading.
Jan 27 22:36:26 compute-0 systemd-rc-local-generator[204511]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:36:26 compute-0 systemd-sysv-generator[204514]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:36:26 compute-0 sudo[204480]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:26 compute-0 sudo[204591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjzsrjxilybgwindeixfmyxjhlshugfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553384.8350546-1029-256928218164736/AnsiballZ_systemd.py'
Jan 27 22:36:26 compute-0 sudo[204591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:26 compute-0 python3.9[204593]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:36:27 compute-0 systemd[1]: Reloading.
Jan 27 22:36:27 compute-0 systemd-rc-local-generator[204626]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:36:27 compute-0 systemd-sysv-generator[204629]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:36:27 compute-0 systemd[1]: Starting openstack_network_exporter container...
Jan 27 22:36:27 compute-0 systemd[1]: Started libcrun container.
Jan 27 22:36:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae1cf80578966016e34eb16f1d01bc1bfbc7c1f3b76030379ce2db2d7cc707a6/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 27 22:36:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae1cf80578966016e34eb16f1d01bc1bfbc7c1f3b76030379ce2db2d7cc707a6/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 22:36:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae1cf80578966016e34eb16f1d01bc1bfbc7c1f3b76030379ce2db2d7cc707a6/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Jan 27 22:36:27 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372.
Jan 27 22:36:27 compute-0 podman[204633]: 2026-01-27 22:36:27.491713444 +0000 UTC m=+0.139321356 container init b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.openshift.tags=minimal rhel9, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.7)
Jan 27 22:36:27 compute-0 openstack_network_exporter[204648]: INFO    22:36:27 main.go:48: registering *bridge.Collector
Jan 27 22:36:27 compute-0 openstack_network_exporter[204648]: INFO    22:36:27 main.go:48: registering *coverage.Collector
Jan 27 22:36:27 compute-0 openstack_network_exporter[204648]: INFO    22:36:27 main.go:48: registering *datapath.Collector
Jan 27 22:36:27 compute-0 openstack_network_exporter[204648]: INFO    22:36:27 main.go:48: registering *iface.Collector
Jan 27 22:36:27 compute-0 openstack_network_exporter[204648]: INFO    22:36:27 main.go:48: registering *memory.Collector
Jan 27 22:36:27 compute-0 openstack_network_exporter[204648]: INFO    22:36:27 main.go:55: *ovnnorthd.Collector not registered, metric set not enabled
Jan 27 22:36:27 compute-0 openstack_network_exporter[204648]: INFO    22:36:27 main.go:48: registering *ovn.Collector
Jan 27 22:36:27 compute-0 openstack_network_exporter[204648]: INFO    22:36:27 main.go:55: *ovsdbserver.Collector not registered, metric set not enabled
Jan 27 22:36:27 compute-0 openstack_network_exporter[204648]: INFO    22:36:27 main.go:48: registering *pmd_perf.Collector
Jan 27 22:36:27 compute-0 openstack_network_exporter[204648]: INFO    22:36:27 main.go:48: registering *pmd_rxq.Collector
Jan 27 22:36:27 compute-0 openstack_network_exporter[204648]: INFO    22:36:27 main.go:48: registering *vswitch.Collector
Jan 27 22:36:27 compute-0 openstack_network_exporter[204648]: NOTICE  22:36:27 main.go:76: listening on https://:9105/metrics
Jan 27 22:36:27 compute-0 podman[204633]: 2026-01-27 22:36:27.514105482 +0000 UTC m=+0.161713374 container start b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, architecture=x86_64, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, config_id=openstack_network_exporter, distribution-scope=public, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 22:36:27 compute-0 podman[204633]: openstack_network_exporter
Jan 27 22:36:27 compute-0 systemd[1]: Started openstack_network_exporter container.
Jan 27 22:36:27 compute-0 sudo[204591]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:27 compute-0 podman[204658]: 2026-01-27 22:36:27.596780069 +0000 UTC m=+0.071985738 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Jan 27 22:36:28 compute-0 python3.9[204832]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 27 22:36:29 compute-0 sudo[204982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlrdnwgitxdyeouumzhuusiyxtiukymg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553388.7536213-1074-42113025745926/AnsiballZ_stat.py'
Jan 27 22:36:29 compute-0 sudo[204982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:29 compute-0 python3.9[204984]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:36:29 compute-0 sudo[204982]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:29 compute-0 sudo[205107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqirqxtajhbqrhqdzbaridciqlwzunew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553388.7536213-1074-42113025745926/AnsiballZ_copy.py'
Jan 27 22:36:29 compute-0 sudo[205107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:29 compute-0 python3.9[205109]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769553388.7536213-1074-42113025745926/.source.yaml _original_basename=.w77nsvrh follow=False checksum=ddcd4d1f5ba0af58d8998366b0558af2872fe5c7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:36:29 compute-0 sudo[205107]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:30 compute-0 sudo[205259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqgxuikachevqciglvsdpygfmwskulyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553390.0461676-1089-207732341953363/AnsiballZ_find.py'
Jan 27 22:36:30 compute-0 sudo[205259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:30 compute-0 python3.9[205261]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 22:36:30 compute-0 sudo[205259]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:31 compute-0 sudo[205411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhbzgrnjlirjynmyviqhgxgsjtnsecaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553390.8683586-1099-82743997396479/AnsiballZ_podman_container_info.py'
Jan 27 22:36:31 compute-0 sudo[205411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:31 compute-0 python3.9[205413]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Jan 27 22:36:31 compute-0 sudo[205411]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:32 compute-0 sudo[205576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stvboleuowawqyjrnweygotxfrkwmsma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553391.8250096-1107-247850370416482/AnsiballZ_podman_container_exec.py'
Jan 27 22:36:32 compute-0 sudo[205576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:32 compute-0 python3.9[205578]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 22:36:32 compute-0 systemd[1]: Started libpod-conmon-5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16.scope.
Jan 27 22:36:32 compute-0 podman[205579]: 2026-01-27 22:36:32.635184289 +0000 UTC m=+0.086195507 container exec 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 22:36:32 compute-0 podman[205579]: 2026-01-27 22:36:32.665683984 +0000 UTC m=+0.116695182 container exec_died 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 27 22:36:32 compute-0 systemd[1]: libpod-conmon-5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16.scope: Deactivated successfully.
Jan 27 22:36:32 compute-0 sudo[205576]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:33 compute-0 sudo[205757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwxezcdcvawewoyscyjdmfpmxgpqwycc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553392.8609354-1115-163702875284897/AnsiballZ_podman_container_exec.py'
Jan 27 22:36:33 compute-0 sudo[205757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:33 compute-0 python3.9[205759]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 22:36:33 compute-0 systemd[1]: Started libpod-conmon-5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16.scope.
Jan 27 22:36:33 compute-0 podman[205760]: 2026-01-27 22:36:33.368119713 +0000 UTC m=+0.067656968 container exec 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 27 22:36:33 compute-0 podman[205760]: 2026-01-27 22:36:33.402214588 +0000 UTC m=+0.101751773 container exec_died 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 27 22:36:33 compute-0 systemd[1]: libpod-conmon-5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16.scope: Deactivated successfully.
Jan 27 22:36:33 compute-0 sudo[205757]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:33 compute-0 sudo[205940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvsmoyavstxqsmuxtjxxsdvlqvfmrciw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553393.606809-1123-97515224215034/AnsiballZ_file.py'
Jan 27 22:36:33 compute-0 sudo[205940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:34 compute-0 python3.9[205942]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:36:34 compute-0 sudo[205940]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:34 compute-0 sudo[206092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcllwzngvmydqdclokxjigknaqxqexpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553394.3881469-1132-68258167240487/AnsiballZ_podman_container_info.py'
Jan 27 22:36:34 compute-0 sudo[206092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:34 compute-0 python3.9[206094]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Jan 27 22:36:34 compute-0 sudo[206092]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:35 compute-0 sudo[206257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewbzddlpiaemslrujwgpoumzkkivoggu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553395.117627-1140-122329940373960/AnsiballZ_podman_container_exec.py'
Jan 27 22:36:35 compute-0 sudo[206257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:35 compute-0 python3.9[206259]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 22:36:35 compute-0 systemd[1]: Started libpod-conmon-70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc.scope.
Jan 27 22:36:35 compute-0 podman[206260]: 2026-01-27 22:36:35.723429788 +0000 UTC m=+0.105957231 container exec 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 22:36:35 compute-0 podman[206260]: 2026-01-27 22:36:35.758221974 +0000 UTC m=+0.140749427 container exec_died 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 27 22:36:35 compute-0 systemd[1]: libpod-conmon-70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc.scope: Deactivated successfully.
Jan 27 22:36:35 compute-0 sudo[206257]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:36 compute-0 sudo[206441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoolrypumuhgtgpmljzvxzhaeqrgndzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553396.009095-1148-234936508188210/AnsiballZ_podman_container_exec.py'
Jan 27 22:36:36 compute-0 sudo[206441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:36 compute-0 python3.9[206443]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 22:36:36 compute-0 systemd[1]: Started libpod-conmon-70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc.scope.
Jan 27 22:36:36 compute-0 podman[206444]: 2026-01-27 22:36:36.596429008 +0000 UTC m=+0.073389198 container exec 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 27 22:36:36 compute-0 podman[206444]: 2026-01-27 22:36:36.627602311 +0000 UTC m=+0.104562501 container exec_died 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 22:36:36 compute-0 sudo[206441]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:36 compute-0 systemd[1]: libpod-conmon-70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc.scope: Deactivated successfully.
Jan 27 22:36:37 compute-0 sudo[206625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nudvzcgsfyzvlmillcfslsvofpsryggp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553396.8529754-1156-279688317865423/AnsiballZ_file.py'
Jan 27 22:36:37 compute-0 sudo[206625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:37 compute-0 python3.9[206627]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:36:37 compute-0 sudo[206625]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:37 compute-0 podman[206628]: 2026-01-27 22:36:37.348793668 +0000 UTC m=+0.052820692 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=unhealthy, health_failing_streak=3, health_log=, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 27 22:36:37 compute-0 systemd[1]: 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617-3a283224e5730c2.service: Main process exited, code=exited, status=1/FAILURE
Jan 27 22:36:37 compute-0 systemd[1]: 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617-3a283224e5730c2.service: Failed with result 'exit-code'.
Jan 27 22:36:37 compute-0 sudo[206797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iadiouumcddaqcmcossupklljyutgrbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553397.525643-1165-234654663383799/AnsiballZ_podman_container_info.py'
Jan 27 22:36:37 compute-0 sudo[206797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:38 compute-0 python3.9[206799]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Jan 27 22:36:38 compute-0 sudo[206797]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:38 compute-0 sudo[206962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wynwkmwgfmneijxohzfbsbmuhbasgebr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553398.3836055-1173-150942415369484/AnsiballZ_podman_container_exec.py'
Jan 27 22:36:38 compute-0 sudo[206962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:38 compute-0 python3.9[206964]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 22:36:38 compute-0 systemd[1]: Started libpod-conmon-7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617.scope.
Jan 27 22:36:38 compute-0 podman[206965]: 2026-01-27 22:36:38.988315892 +0000 UTC m=+0.093665597 container exec 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, io.buildah.version=1.41.4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 22:36:39 compute-0 podman[206965]: 2026-01-27 22:36:39.026299673 +0000 UTC m=+0.131649338 container exec_died 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260126, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40)
Jan 27 22:36:39 compute-0 systemd[1]: libpod-conmon-7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617.scope: Deactivated successfully.
Jan 27 22:36:39 compute-0 sudo[206962]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:39 compute-0 podman[206998]: 2026-01-27 22:36:39.188157336 +0000 UTC m=+0.072639727 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 22:36:39 compute-0 sudo[207167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvnebatioogqaypdsunuonutjwnzclhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553399.296112-1181-152116564609840/AnsiballZ_podman_container_exec.py'
Jan 27 22:36:39 compute-0 sudo[207167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:39 compute-0 python3.9[207169]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 22:36:39 compute-0 systemd[1]: Started libpod-conmon-7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617.scope.
Jan 27 22:36:39 compute-0 podman[207170]: 2026-01-27 22:36:39.918815939 +0000 UTC m=+0.093274965 container exec 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2)
Jan 27 22:36:39 compute-0 podman[207170]: 2026-01-27 22:36:39.949394206 +0000 UTC m=+0.123853172 container exec_died 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Jan 27 22:36:39 compute-0 systemd[1]: libpod-conmon-7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617.scope: Deactivated successfully.
Jan 27 22:36:40 compute-0 sudo[207167]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:40 compute-0 podman[207278]: 2026-01-27 22:36:40.391271637 +0000 UTC m=+0.066621363 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 22:36:40 compute-0 sudo[207375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tagxcplbvqrefoupmkkayvefsywhdibq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553400.175001-1189-21202140921225/AnsiballZ_file.py'
Jan 27 22:36:40 compute-0 sudo[207375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:40 compute-0 nova_compute[185650]: 2026-01-27 22:36:40.613 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:36:40 compute-0 nova_compute[185650]: 2026-01-27 22:36:40.646 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:36:40 compute-0 nova_compute[185650]: 2026-01-27 22:36:40.671 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:36:40 compute-0 nova_compute[185650]: 2026-01-27 22:36:40.671 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:36:40 compute-0 nova_compute[185650]: 2026-01-27 22:36:40.671 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:36:40 compute-0 nova_compute[185650]: 2026-01-27 22:36:40.672 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 22:36:40 compute-0 python3.9[207377]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:36:40 compute-0 sudo[207375]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:40 compute-0 nova_compute[185650]: 2026-01-27 22:36:40.846 185654 WARNING nova.virt.libvirt.driver [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 22:36:40 compute-0 nova_compute[185650]: 2026-01-27 22:36:40.847 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5828MB free_disk=72.4499397277832GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 22:36:40 compute-0 nova_compute[185650]: 2026-01-27 22:36:40.847 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:36:40 compute-0 nova_compute[185650]: 2026-01-27 22:36:40.847 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:36:40 compute-0 nova_compute[185650]: 2026-01-27 22:36:40.904 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 22:36:40 compute-0 nova_compute[185650]: 2026-01-27 22:36:40.904 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 22:36:40 compute-0 nova_compute[185650]: 2026-01-27 22:36:40.932 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 22:36:40 compute-0 nova_compute[185650]: 2026-01-27 22:36:40.949 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 22:36:40 compute-0 nova_compute[185650]: 2026-01-27 22:36:40.950 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 22:36:40 compute-0 nova_compute[185650]: 2026-01-27 22:36:40.950 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:36:41 compute-0 sudo[207527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exssovfyfmbboyvbwjuyyprcrfjiwnyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553400.9392986-1198-6306366889027/AnsiballZ_podman_container_info.py'
Jan 27 22:36:41 compute-0 sudo[207527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:41 compute-0 nova_compute[185650]: 2026-01-27 22:36:41.297 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:36:41 compute-0 nova_compute[185650]: 2026-01-27 22:36:41.298 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:36:41 compute-0 nova_compute[185650]: 2026-01-27 22:36:41.298 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 22:36:41 compute-0 nova_compute[185650]: 2026-01-27 22:36:41.298 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 22:36:41 compute-0 nova_compute[185650]: 2026-01-27 22:36:41.310 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 22:36:41 compute-0 nova_compute[185650]: 2026-01-27 22:36:41.310 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:36:41 compute-0 nova_compute[185650]: 2026-01-27 22:36:41.311 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:36:41 compute-0 python3.9[207529]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Jan 27 22:36:41 compute-0 sudo[207527]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:41 compute-0 sudo[207692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyuanvmrpyjpbgthobhycgisetauluud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553401.6392536-1206-122526000722621/AnsiballZ_podman_container_exec.py'
Jan 27 22:36:41 compute-0 sudo[207692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:41 compute-0 nova_compute[185650]: 2026-01-27 22:36:41.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:36:41 compute-0 nova_compute[185650]: 2026-01-27 22:36:41.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:36:41 compute-0 nova_compute[185650]: 2026-01-27 22:36:41.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:36:41 compute-0 nova_compute[185650]: 2026-01-27 22:36:41.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:36:41 compute-0 nova_compute[185650]: 2026-01-27 22:36:41.993 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 22:36:42 compute-0 python3.9[207694]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 22:36:42 compute-0 systemd[1]: Started libpod-conmon-f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de.scope.
Jan 27 22:36:42 compute-0 podman[207695]: 2026-01-27 22:36:42.266170254 +0000 UTC m=+0.099551777 container exec f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 22:36:42 compute-0 podman[207695]: 2026-01-27 22:36:42.297592885 +0000 UTC m=+0.130974408 container exec_died f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 22:36:42 compute-0 sudo[207692]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:42 compute-0 systemd[1]: libpod-conmon-f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de.scope: Deactivated successfully.
Jan 27 22:36:42 compute-0 sudo[207875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wptbkvsbkrlhwdjnppajhqyhfpzkgxux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553402.5081787-1214-112253412892211/AnsiballZ_podman_container_exec.py'
Jan 27 22:36:42 compute-0 sudo[207875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:42 compute-0 python3.9[207877]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 22:36:43 compute-0 systemd[1]: Started libpod-conmon-f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de.scope.
Jan 27 22:36:43 compute-0 podman[207878]: 2026-01-27 22:36:43.036184089 +0000 UTC m=+0.066163770 container exec f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 22:36:43 compute-0 podman[207878]: 2026-01-27 22:36:43.067008663 +0000 UTC m=+0.096988344 container exec_died f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 22:36:43 compute-0 systemd[1]: libpod-conmon-f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de.scope: Deactivated successfully.
Jan 27 22:36:43 compute-0 sudo[207875]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:43 compute-0 sudo[208060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpidkwellrhvonmyltkseixywwatklqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553403.3168192-1222-271122645972136/AnsiballZ_file.py'
Jan 27 22:36:43 compute-0 sudo[208060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:43 compute-0 python3.9[208062]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:36:43 compute-0 sudo[208060]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:44 compute-0 sudo[208212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agmqvyrhxgcyvawnrpawubnsmfmcrvth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553404.0033627-1231-105617910088814/AnsiballZ_podman_container_info.py'
Jan 27 22:36:44 compute-0 sudo[208212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:44 compute-0 python3.9[208214]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Jan 27 22:36:44 compute-0 sudo[208212]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:44 compute-0 sudo[208377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkqqbnawsnorgnprxszcsystkpzcouyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553404.6810575-1239-157386597447379/AnsiballZ_podman_container_exec.py'
Jan 27 22:36:44 compute-0 sudo[208377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:45 compute-0 python3.9[208379]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 22:36:45 compute-0 systemd[1]: Started libpod-conmon-245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b.scope.
Jan 27 22:36:45 compute-0 podman[208380]: 2026-01-27 22:36:45.244373351 +0000 UTC m=+0.065071308 container exec 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 22:36:45 compute-0 podman[208380]: 2026-01-27 22:36:45.278014086 +0000 UTC m=+0.098712003 container exec_died 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 22:36:45 compute-0 systemd[1]: libpod-conmon-245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b.scope: Deactivated successfully.
Jan 27 22:36:45 compute-0 sudo[208377]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:45 compute-0 sudo[208560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mawwmxwehaaywqazcgztvazsdxfaenhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553405.5135403-1247-207042613824372/AnsiballZ_podman_container_exec.py'
Jan 27 22:36:45 compute-0 sudo[208560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:46 compute-0 python3.9[208562]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 22:36:46 compute-0 systemd[1]: Started libpod-conmon-245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b.scope.
Jan 27 22:36:46 compute-0 podman[208563]: 2026-01-27 22:36:46.112878571 +0000 UTC m=+0.060994720 container exec 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 22:36:46 compute-0 podman[208563]: 2026-01-27 22:36:46.147068302 +0000 UTC m=+0.095184401 container exec_died 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 22:36:46 compute-0 systemd[1]: libpod-conmon-245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b.scope: Deactivated successfully.
Jan 27 22:36:46 compute-0 sudo[208560]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:46 compute-0 sudo[208745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdjezboppgukvstowxueyiduavnyhudv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553406.3728151-1255-190797033086431/AnsiballZ_file.py'
Jan 27 22:36:46 compute-0 sudo[208745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:46 compute-0 python3.9[208747]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:36:46 compute-0 sudo[208745]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:47 compute-0 sudo[208914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cavrkqqsrgwurswujzcweouykixkgcbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553407.0601263-1264-219776661567185/AnsiballZ_podman_container_info.py'
Jan 27 22:36:47 compute-0 sudo[208914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:47 compute-0 podman[208871]: 2026-01-27 22:36:47.366456944 +0000 UTC m=+0.088667461 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 27 22:36:47 compute-0 python3.9[208920]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Jan 27 22:36:47 compute-0 sudo[208914]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:48 compute-0 sudo[209090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xegqugsaofegbevmtexnmzrpelpvcijy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553407.8003209-1272-60385845201848/AnsiballZ_podman_container_exec.py'
Jan 27 22:36:48 compute-0 sudo[209090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:48 compute-0 python3.9[209092]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 22:36:48 compute-0 systemd[1]: Started libpod-conmon-b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372.scope.
Jan 27 22:36:48 compute-0 podman[209093]: 2026-01-27 22:36:48.458195276 +0000 UTC m=+0.068653961 container exec b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, distribution-scope=public, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=openstack_network_exporter, io.buildah.version=1.33.7, version=9.6, vcs-type=git)
Jan 27 22:36:48 compute-0 podman[209093]: 2026-01-27 22:36:48.492271744 +0000 UTC m=+0.102730429 container exec_died b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.33.7, managed_by=edpm_ansible, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, maintainer=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, distribution-scope=public, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.expose-services=)
Jan 27 22:36:48 compute-0 systemd[1]: libpod-conmon-b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372.scope: Deactivated successfully.
Jan 27 22:36:48 compute-0 sudo[209090]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:48 compute-0 sudo[209272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yimsbbpyqjggmujegbrmusffzrjuqand ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553408.6666994-1280-95287499645966/AnsiballZ_podman_container_exec.py'
Jan 27 22:36:48 compute-0 sudo[209272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:49 compute-0 python3.9[209274]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 22:36:49 compute-0 systemd[1]: Started libpod-conmon-b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372.scope.
Jan 27 22:36:49 compute-0 podman[209275]: 2026-01-27 22:36:49.22095567 +0000 UTC m=+0.081739310 container exec b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, vcs-type=git, version=9.6)
Jan 27 22:36:49 compute-0 podman[209275]: 2026-01-27 22:36:49.256959074 +0000 UTC m=+0.117742674 container exec_died b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, vcs-type=git, version=9.6, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, container_name=openstack_network_exporter, managed_by=edpm_ansible)
Jan 27 22:36:49 compute-0 systemd[1]: libpod-conmon-b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372.scope: Deactivated successfully.
Jan 27 22:36:49 compute-0 sudo[209272]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:49 compute-0 sudo[209456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfwsmudpvimsjnuiihfjgjuidvkbfgbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553409.4753146-1288-104838155036867/AnsiballZ_file.py'
Jan 27 22:36:49 compute-0 sudo[209456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:49 compute-0 python3.9[209458]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:36:49 compute-0 sudo[209456]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:50 compute-0 sudo[209608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctlaoqdzmljiwdehvpiltimiksgeqsxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553410.1961694-1297-5304259053922/AnsiballZ_file.py'
Jan 27 22:36:50 compute-0 sudo[209608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:50 compute-0 python3.9[209610]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:36:50 compute-0 sudo[209608]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:51 compute-0 sudo[209760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtukbgdsdxuvdioegdywnvaqzealvlbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553410.9611268-1305-24074464344928/AnsiballZ_stat.py'
Jan 27 22:36:51 compute-0 sudo[209760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:51 compute-0 python3.9[209762]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:36:51 compute-0 sudo[209760]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:51 compute-0 sudo[209883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyaqfvqlxmayeyhofwfhkilchfifbvqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553410.9611268-1305-24074464344928/AnsiballZ_copy.py'
Jan 27 22:36:51 compute-0 sudo[209883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:51 compute-0 python3.9[209885]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769553410.9611268-1305-24074464344928/.source.yaml _original_basename=firewall.yaml follow=False checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:36:51 compute-0 sudo[209883]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:52 compute-0 sudo[210035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvalgbbzlpayzoisfvvoqxwjijcmnfmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553412.1744766-1321-93376218680631/AnsiballZ_file.py'
Jan 27 22:36:52 compute-0 sudo[210035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:52 compute-0 python3.9[210037]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:36:52 compute-0 sudo[210035]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:53 compute-0 sudo[210187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfalqrgjzqvsfsxfrgjdozwqicniggez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553412.8416471-1329-134349389585605/AnsiballZ_stat.py'
Jan 27 22:36:53 compute-0 sudo[210187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:53 compute-0 python3.9[210189]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:36:53 compute-0 sudo[210187]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:53 compute-0 sudo[210265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkjilonhdwwdlknqxbkafurnjjnmxlql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553412.8416471-1329-134349389585605/AnsiballZ_file.py'
Jan 27 22:36:53 compute-0 sudo[210265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:53 compute-0 python3.9[210267]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:36:53 compute-0 sudo[210265]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:54 compute-0 sudo[210426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etygeprfqnwegtjnsbpchvdzaebmfeez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553414.0100555-1341-198449864008361/AnsiballZ_stat.py'
Jan 27 22:36:54 compute-0 sudo[210426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:54 compute-0 podman[210391]: 2026-01-27 22:36:54.315452681 +0000 UTC m=+0.070421192 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 22:36:54 compute-0 python3.9[210435]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:36:54 compute-0 sudo[210426]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:54 compute-0 sudo[210518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqvcuqtsyaoxgougcjypuobtnxyugmhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553414.0100555-1341-198449864008361/AnsiballZ_file.py'
Jan 27 22:36:54 compute-0 sudo[210518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:54 compute-0 python3.9[210520]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.8bbnuso3 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:36:55 compute-0 sudo[210518]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:55 compute-0 sudo[210670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-appsyxvaixyfqzuseudfdgsvkegszpxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553415.157769-1353-28651013914176/AnsiballZ_stat.py'
Jan 27 22:36:55 compute-0 sudo[210670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:55 compute-0 python3.9[210672]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:36:55 compute-0 sudo[210670]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:56 compute-0 sudo[210748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bczuxwizweuktjardbrxykdrmlmfmkhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553415.157769-1353-28651013914176/AnsiballZ_file.py'
Jan 27 22:36:56 compute-0 sudo[210748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:56 compute-0 python3.9[210750]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:36:56 compute-0 sudo[210748]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:56 compute-0 sudo[210900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dycafzyiwgjjuseqpodovunxugnmcbeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553416.4854286-1366-147948152294054/AnsiballZ_command.py'
Jan 27 22:36:56 compute-0 sudo[210900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:56 compute-0 python3.9[210902]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:36:56 compute-0 sudo[210900]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:57 compute-0 sudo[211053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oywbkopesodgajjvkebfkwcjucweiemx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769553417.1313484-1374-127163578535350/AnsiballZ_edpm_nftables_from_files.py'
Jan 27 22:36:57 compute-0 sudo[211053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:57 compute-0 python3[211055]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 27 22:36:57 compute-0 sudo[211053]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:58 compute-0 podman[211132]: 2026-01-27 22:36:58.369140217 +0000 UTC m=+0.071608467 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, version=9.6)
Jan 27 22:36:58 compute-0 sudo[211224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwjwnpgypbxjoxlxqetpgqzzlocbtxlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553417.9946299-1382-59588769335780/AnsiballZ_stat.py'
Jan 27 22:36:58 compute-0 sudo[211224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:58 compute-0 python3.9[211226]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:36:58 compute-0 sudo[211224]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:59 compute-0 sudo[211302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xogyuryqnobefsqeturjglumpiymfyyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553417.9946299-1382-59588769335780/AnsiballZ_file.py'
Jan 27 22:36:59 compute-0 sudo[211302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:59 compute-0 python3.9[211304]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:36:59 compute-0 sudo[211302]: pam_unix(sudo:session): session closed for user root
Jan 27 22:36:59 compute-0 sudo[211454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzkiohxfenkkckazewpqfccfuxsuaome ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553419.4550905-1394-248201987930689/AnsiballZ_stat.py'
Jan 27 22:36:59 compute-0 sudo[211454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:36:59 compute-0 python3.9[211456]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:36:59 compute-0 sudo[211454]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:00 compute-0 sudo[211532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udlbbxmjzairoaztoqmunpocpbbnhaqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553419.4550905-1394-248201987930689/AnsiballZ_file.py'
Jan 27 22:37:00 compute-0 sudo[211532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:00 compute-0 python3.9[211534]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:37:00 compute-0 sudo[211532]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:00 compute-0 sudo[211684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dadfkuqsyoerdvzvweciobekcwaykvsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553420.5511167-1406-212362903489593/AnsiballZ_stat.py'
Jan 27 22:37:00 compute-0 sudo[211684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:00 compute-0 python3.9[211686]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:37:01 compute-0 sudo[211684]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:01 compute-0 sudo[211762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwfotdodncjibecuilpjlfykmkfpaeka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553420.5511167-1406-212362903489593/AnsiballZ_file.py'
Jan 27 22:37:01 compute-0 sudo[211762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:01 compute-0 python3.9[211764]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:37:01 compute-0 sudo[211762]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:01 compute-0 sudo[211914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyjyjdzkcdrycsuhjimepoeglfudbtxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553421.571792-1418-163510936378952/AnsiballZ_stat.py'
Jan 27 22:37:01 compute-0 sudo[211914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:02 compute-0 python3.9[211916]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:37:02 compute-0 sudo[211914]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:02 compute-0 sudo[211992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agqdjqqzmunwxjgvgcetmaxumeohmrab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553421.571792-1418-163510936378952/AnsiballZ_file.py'
Jan 27 22:37:02 compute-0 sudo[211992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:02 compute-0 python3.9[211994]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:37:02 compute-0 sudo[211992]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:03 compute-0 sudo[212144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scjohlutrwvrcpfpredlypfuwstrpzum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553422.8266108-1430-54066470409894/AnsiballZ_stat.py'
Jan 27 22:37:03 compute-0 sudo[212144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:03 compute-0 python3.9[212146]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:37:03 compute-0 sudo[212144]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:03 compute-0 sudo[212269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okmkixlwcejudhsroofkiaphtilmvlgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553422.8266108-1430-54066470409894/AnsiballZ_copy.py'
Jan 27 22:37:03 compute-0 sudo[212269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:03 compute-0 python3.9[212271]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769553422.8266108-1430-54066470409894/.source.nft follow=False _original_basename=ruleset.j2 checksum=fb3275eced3a2e06312143189928124e1b2df34a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:37:03 compute-0 sudo[212269]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:37:04.122 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:37:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:37:04.123 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:37:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:37:04.123 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:37:04 compute-0 sudo[212421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tftqulcjturejotokswbibouxfxqnlpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553424.1279612-1445-27241109320950/AnsiballZ_file.py'
Jan 27 22:37:04 compute-0 sudo[212421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:04 compute-0 python3.9[212423]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:37:04 compute-0 sudo[212421]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:04 compute-0 sudo[212573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgjfpwdyxoahzxfqplqdxmxjcihcysmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553424.7216458-1453-18287131520346/AnsiballZ_command.py'
Jan 27 22:37:04 compute-0 sudo[212573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:05 compute-0 python3.9[212575]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:37:05 compute-0 sudo[212573]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:05 compute-0 sudo[212728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqjzvxfewodznffyjqnoacyacvhyoxef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553425.3335624-1461-99874456616686/AnsiballZ_blockinfile.py'
Jan 27 22:37:05 compute-0 sudo[212728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:06 compute-0 python3.9[212730]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:37:06 compute-0 sudo[212728]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:06 compute-0 sudo[212880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bttqxpxwwbyfgnjaoczckhbbggctzlkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553426.2884297-1470-169015715087374/AnsiballZ_command.py'
Jan 27 22:37:06 compute-0 sudo[212880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:06 compute-0 python3.9[212882]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:37:07 compute-0 sudo[212880]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:07 compute-0 sudo[213042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojpdxlzalraxpbngzhcuqlzwyszskhcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553427.2163916-1478-231737087174630/AnsiballZ_stat.py'
Jan 27 22:37:07 compute-0 sudo[213042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:07 compute-0 podman[213007]: 2026-01-27 22:37:07.567223401 +0000 UTC m=+0.074108239 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2)
Jan 27 22:37:07 compute-0 python3.9[213046]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:37:07 compute-0 sudo[213042]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:08 compute-0 sudo[213205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgbjwktzkewvlzewdlicqrrlxwzzlgah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553427.944518-1486-118031796304401/AnsiballZ_command.py'
Jan 27 22:37:08 compute-0 sudo[213205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:08 compute-0 python3.9[213207]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:37:08 compute-0 sudo[213205]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:08 compute-0 openstack_network_exporter[204648]: ERROR   22:37:08 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:37:08 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:37:08 compute-0 openstack_network_exporter[204648]: ERROR   22:37:08 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:37:08 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:37:08 compute-0 sudo[213365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gciwwmjdhjnhandvbwdkbadaaqqbfagr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553428.6578686-1494-196411382503715/AnsiballZ_file.py'
Jan 27 22:37:08 compute-0 sudo[213365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:09 compute-0 python3.9[213367]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:37:09 compute-0 sudo[213365]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:09 compute-0 podman[213392]: 2026-01-27 22:37:09.356606809 +0000 UTC m=+0.061980328 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 27 22:37:09 compute-0 sshd-session[185993]: Connection closed by 192.168.122.30 port 60750
Jan 27 22:37:09 compute-0 sshd-session[185990]: pam_unix(sshd:session): session closed for user zuul
Jan 27 22:37:09 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Jan 27 22:37:09 compute-0 systemd[1]: session-25.scope: Consumed 1min 49.234s CPU time.
Jan 27 22:37:09 compute-0 systemd-logind[789]: Session 25 logged out. Waiting for processes to exit.
Jan 27 22:37:09 compute-0 systemd-logind[789]: Removed session 25.
Jan 27 22:37:11 compute-0 podman[213411]: 2026-01-27 22:37:11.340556228 +0000 UTC m=+0.046302153 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 22:37:16 compute-0 sshd-session[213435]: Accepted publickey for zuul from 192.168.122.30 port 53754 ssh2: ECDSA SHA256:f2siSFgqhRl+V43NMPJ82N3mZUylXFtu0KAbYfQTK7A
Jan 27 22:37:16 compute-0 systemd-logind[789]: New session 26 of user zuul.
Jan 27 22:37:16 compute-0 systemd[1]: Started Session 26 of User zuul.
Jan 27 22:37:16 compute-0 sshd-session[213435]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 22:37:16 compute-0 sudo[213588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfdqnyehpuawptaczwuslocondscrgsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553436.1775308-19-29695272376998/AnsiballZ_systemd_service.py'
Jan 27 22:37:16 compute-0 sudo[213588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:17 compute-0 python3.9[213590]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 22:37:17 compute-0 systemd[1]: Reloading.
Jan 27 22:37:17 compute-0 systemd-rc-local-generator[213614]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:37:17 compute-0 systemd-sysv-generator[213618]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:37:17 compute-0 sudo[213588]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:17 compute-0 podman[213626]: 2026-01-27 22:37:17.628536671 +0000 UTC m=+0.125965703 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 22:37:18 compute-0 python3.9[213802]: ansible-ansible.builtin.service_facts Invoked
Jan 27 22:37:18 compute-0 network[213819]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 27 22:37:18 compute-0 network[213820]: 'network-scripts' will be removed from distribution in near future.
Jan 27 22:37:18 compute-0 network[213821]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 27 22:37:22 compute-0 sudo[214092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjxgfvomdxmcgdyohcthbzmoaxoomnqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553442.4980118-42-22516747529341/AnsiballZ_systemd_service.py'
Jan 27 22:37:22 compute-0 sudo[214092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:23 compute-0 python3.9[214094]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:37:23 compute-0 sudo[214092]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:23 compute-0 sudo[214246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odfzpnjznfdpnbppwvhfedtajwllueqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553443.387257-52-103412442092424/AnsiballZ_file.py'
Jan 27 22:37:23 compute-0 sudo[214246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:23 compute-0 python3.9[214248]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:37:23 compute-0 sudo[214246]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:24 compute-0 sudo[214398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eipudleopwakqvbnjoqxewiowfngmuvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553444.1212683-60-204196315767014/AnsiballZ_file.py'
Jan 27 22:37:24 compute-0 sudo[214398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:24 compute-0 podman[214400]: 2026-01-27 22:37:24.427729696 +0000 UTC m=+0.049758824 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 22:37:24 compute-0 python3.9[214401]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:37:24 compute-0 sudo[214398]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:25 compute-0 sudo[214575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gelmzxugrseexjrnwmkcgbvidsqhacdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553444.8178062-69-115654894444431/AnsiballZ_command.py'
Jan 27 22:37:25 compute-0 sudo[214575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:25 compute-0 python3.9[214577]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:37:25 compute-0 sudo[214575]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:26 compute-0 python3.9[214729]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 22:37:26 compute-0 sudo[214879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iloduqonlhpdgvqvtutuwiwvogyidacc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553446.5201843-87-262592577121707/AnsiballZ_systemd_service.py'
Jan 27 22:37:26 compute-0 sudo[214879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:27 compute-0 python3.9[214881]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 22:37:27 compute-0 systemd[1]: Reloading.
Jan 27 22:37:27 compute-0 systemd-sysv-generator[214910]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:37:27 compute-0 systemd-rc-local-generator[214906]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:37:27 compute-0 sudo[214879]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:27 compute-0 sudo[215065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loryxjramgqejdwxdedxkqpalgwribph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553447.5777442-95-213406080091556/AnsiballZ_command.py'
Jan 27 22:37:27 compute-0 sudo[215065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:28 compute-0 python3.9[215067]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:37:28 compute-0 sudo[215065]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:28 compute-0 sudo[215230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqjtdlwpocswngylsdlmaxbnedebuylb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553448.3160381-104-62946218067569/AnsiballZ_file.py'
Jan 27 22:37:28 compute-0 sudo[215230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:28 compute-0 podman[215192]: 2026-01-27 22:37:28.635633052 +0000 UTC m=+0.071521744 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, version=9.6, name=ubi9-minimal, vcs-type=git, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., 
vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter)
Jan 27 22:37:28 compute-0 python3.9[215236]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:37:28 compute-0 sudo[215230]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:29 compute-0 python3.9[215390]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:37:29 compute-0 podman[201529]: time="2026-01-27T22:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:37:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 21257 "" "Go-http-client/1.1"
Jan 27 22:37:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2993 "" "Go-http-client/1.1"
Jan 27 22:37:30 compute-0 python3.9[215544]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:37:30 compute-0 python3.9[215665]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769553449.8794677-120-279658936125420/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:37:31 compute-0 openstack_network_exporter[204648]: ERROR   22:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:37:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:37:31 compute-0 openstack_network_exporter[204648]: ERROR   22:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:37:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:37:31 compute-0 python3.9[215815]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:37:32 compute-0 python3.9[215936]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/firewall.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769553451.124381-135-186530300019109/.source.yaml _original_basename=firewall.yaml follow=False checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:37:32 compute-0 sudo[216086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llfgzpvkwrfwnelamhahxoyawrehvmgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553452.311002-153-41596702859571/AnsiballZ_getent.py'
Jan 27 22:37:32 compute-0 sudo[216086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:32 compute-0 python3.9[216088]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Jan 27 22:37:32 compute-0 sudo[216086]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:34 compute-0 python3.9[216239]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:37:34 compute-0 python3.9[216360]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769553453.6875513-181-38531730052113/.source.conf _original_basename=ceilometer.conf follow=False checksum=f817847bb0474d7c55a7ad9afdea5f1400a30720 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:37:35 compute-0 python3.9[216510]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:37:35 compute-0 python3.9[216631]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769553454.760824-181-9047408831867/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:37:36 compute-0 python3.9[216781]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:37:36 compute-0 python3.9[216902]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769553455.9259002-181-216518551101823/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:37:37 compute-0 python3.9[217052]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:37:37 compute-0 podman[217055]: 2026-01-27 22:37:37.841414918 +0000 UTC m=+0.056556820 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.099 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.100 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c646060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826a03dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f826c6475f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826a03dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.102 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826a03dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.102 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826a03dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826a03dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826a03dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826a03dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826a03dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826a03dd90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.104 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826a03dd90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f826c645dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826a03dd90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.105 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826a03dd90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f826c647800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826a03dd90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.107 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645490>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826a03dd90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f826c647650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.108 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826a03dd90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.108 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f826c645640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.109 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.108 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826a03dd90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f826c8ae7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826a03dd90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.110 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f826c645a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.111 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f826c6462a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826a03dd90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.111 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826a03dd90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.112 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f826c647f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826a03dd90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.113 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826a03dd90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.114 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f826c645af0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826a03dd90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': [], 'cpu': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.114 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826a03dd90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': [], 'cpu': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.116 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826a03dd90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': [], 'cpu': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.116 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826a03dd90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': [], 'cpu': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f826c645d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.116 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6477a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826a03dd90>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': [], 'cpu': [], 'disk.ephemeral.size': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.117 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.117 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f826c645b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.118 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f826c644a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.118 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f826c6453a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.118 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f826c6454c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f826c645520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f826c645d90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.120 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f826c646570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.120 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f826c645580>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.121 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.121 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f826c6455e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.121 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.121 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f826c644050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.121 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.121 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f826c647860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.122 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.122 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f826c6476e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.122 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.122 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f826c6456a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.122 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.122 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f826f277b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.123 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.123 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f826c647770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.123 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:37:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:37:38.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:37:38 compute-0 python3.9[217226]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:37:39 compute-0 python3.9[217378]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:37:39 compute-0 podman[217473]: 2026-01-27 22:37:39.669497299 +0000 UTC m=+0.114617504 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 22:37:39 compute-0 python3.9[217516]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769553458.5088987-240-29672025844722/.source.yaml _original_basename=ceilometer_prom_exporter.yaml follow=False checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:37:40 compute-0 sudo[217669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulvrtrqxruypmdpwuqkttvrcftyqouik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553459.9596891-255-1263997146244/AnsiballZ_file.py'
Jan 27 22:37:40 compute-0 sudo[217669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:40 compute-0 python3.9[217671]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:37:40 compute-0 sudo[217669]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:40 compute-0 sudo[217821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-morrwhhbwdxkklpgprreevspxwhjzdlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553460.6212034-263-20689896530465/AnsiballZ_file.py'
Jan 27 22:37:40 compute-0 sudo[217821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:40 compute-0 nova_compute[185650]: 2026-01-27 22:37:40.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:37:41 compute-0 nova_compute[185650]: 2026-01-27 22:37:41.018 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:37:41 compute-0 nova_compute[185650]: 2026-01-27 22:37:41.019 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:37:41 compute-0 nova_compute[185650]: 2026-01-27 22:37:41.019 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:37:41 compute-0 nova_compute[185650]: 2026-01-27 22:37:41.019 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 22:37:41 compute-0 python3.9[217823]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:37:41 compute-0 sudo[217821]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:41 compute-0 nova_compute[185650]: 2026-01-27 22:37:41.166 185654 WARNING nova.virt.libvirt.driver [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 22:37:41 compute-0 nova_compute[185650]: 2026-01-27 22:37:41.167 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5841MB free_disk=72.47957611083984GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 22:37:41 compute-0 nova_compute[185650]: 2026-01-27 22:37:41.167 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:37:41 compute-0 nova_compute[185650]: 2026-01-27 22:37:41.168 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:37:41 compute-0 nova_compute[185650]: 2026-01-27 22:37:41.217 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 22:37:41 compute-0 nova_compute[185650]: 2026-01-27 22:37:41.217 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 22:37:41 compute-0 nova_compute[185650]: 2026-01-27 22:37:41.235 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 22:37:41 compute-0 nova_compute[185650]: 2026-01-27 22:37:41.247 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 22:37:41 compute-0 nova_compute[185650]: 2026-01-27 22:37:41.248 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 22:37:41 compute-0 nova_compute[185650]: 2026-01-27 22:37:41.248 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:37:41 compute-0 sudo[217983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qotgzelrlrlclkegskcxioodeerkzkig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553461.3274143-271-108126677865229/AnsiballZ_file.py'
Jan 27 22:37:41 compute-0 sudo[217983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:41 compute-0 podman[217947]: 2026-01-27 22:37:41.650041259 +0000 UTC m=+0.073422239 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 22:37:41 compute-0 python3.9[217988]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:37:41 compute-0 sudo[217983]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:42 compute-0 nova_compute[185650]: 2026-01-27 22:37:42.243 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:37:42 compute-0 nova_compute[185650]: 2026-01-27 22:37:42.243 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:37:42 compute-0 nova_compute[185650]: 2026-01-27 22:37:42.243 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 22:37:42 compute-0 nova_compute[185650]: 2026-01-27 22:37:42.244 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 22:37:42 compute-0 nova_compute[185650]: 2026-01-27 22:37:42.256 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 22:37:42 compute-0 sudo[218150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zawwfqxakmopndseppfrlxsicnsxczso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553462.042179-279-35365624762916/AnsiballZ_stat.py'
Jan 27 22:37:42 compute-0 sudo[218150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:42 compute-0 python3.9[218152]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:37:42 compute-0 sudo[218150]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:42 compute-0 sudo[218273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcgqwiuazzqbrgriysqietgkigjgskgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553462.042179-279-35365624762916/AnsiballZ_copy.py'
Jan 27 22:37:42 compute-0 sudo[218273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:42 compute-0 nova_compute[185650]: 2026-01-27 22:37:42.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:37:42 compute-0 nova_compute[185650]: 2026-01-27 22:37:42.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:37:43 compute-0 python3.9[218275]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769553462.042179-279-35365624762916/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:37:43 compute-0 sudo[218273]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:43 compute-0 sudo[218349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebyesssuwpdyekdhocwpctnedufswsho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553462.042179-279-35365624762916/AnsiballZ_stat.py'
Jan 27 22:37:43 compute-0 sudo[218349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:43 compute-0 python3.9[218351]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:37:43 compute-0 sudo[218349]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:43 compute-0 sudo[218472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htuwhktacclaxbxperwfibtreyuxvtbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553462.042179-279-35365624762916/AnsiballZ_copy.py'
Jan 27 22:37:43 compute-0 sudo[218472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:43 compute-0 python3.9[218474]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769553462.042179-279-35365624762916/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:37:43 compute-0 sudo[218472]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:43 compute-0 nova_compute[185650]: 2026-01-27 22:37:43.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:37:43 compute-0 nova_compute[185650]: 2026-01-27 22:37:43.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:37:43 compute-0 nova_compute[185650]: 2026-01-27 22:37:43.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:37:43 compute-0 nova_compute[185650]: 2026-01-27 22:37:43.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:37:43 compute-0 nova_compute[185650]: 2026-01-27 22:37:43.993 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 22:37:44 compute-0 sudo[218624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzhdxwclszubiweyddxqzjhreqgrbhvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553464.1424093-279-2563058972772/AnsiballZ_stat.py'
Jan 27 22:37:44 compute-0 sudo[218624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:44 compute-0 python3.9[218626]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:37:44 compute-0 sudo[218624]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:44 compute-0 sudo[218747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfizpesqqojxywyidkpaopyjnjzghohh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553464.1424093-279-2563058972772/AnsiballZ_copy.py'
Jan 27 22:37:44 compute-0 sudo[218747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:45 compute-0 python3.9[218749]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769553464.1424093-279-2563058972772/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:37:45 compute-0 sudo[218747]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:45 compute-0 sudo[218899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yezbncksjtoznikuintckwwjacachhfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553465.3874373-321-212309939150202/AnsiballZ_file.py'
Jan 27 22:37:45 compute-0 sudo[218899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:45 compute-0 python3.9[218901]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:37:45 compute-0 sudo[218899]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:46 compute-0 sudo[219051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpyciayupbudvpzvzmjgcfmmfkkheryv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553466.0284522-329-40983800591248/AnsiballZ_file.py'
Jan 27 22:37:46 compute-0 sudo[219051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:46 compute-0 python3.9[219053]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:37:46 compute-0 sudo[219051]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:46 compute-0 sudo[219203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbygivkfwwssoulmfxomjaadetrcpxhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553466.6747198-337-123291511248352/AnsiballZ_stat.py'
Jan 27 22:37:46 compute-0 sudo[219203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:47 compute-0 python3.9[219205]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:37:47 compute-0 sudo[219203]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:47 compute-0 sudo[219326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruopttdczkwakojbvfwsuwyoaiitjieh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553466.6747198-337-123291511248352/AnsiballZ_copy.py'
Jan 27 22:37:47 compute-0 sudo[219326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:47 compute-0 python3.9[219328]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ceilometer_agent_ipmi.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769553466.6747198-337-123291511248352/.source.json _original_basename=.c7e1wly2 follow=False checksum=fa47598aea39469905a43b7b570ec2fd120965fc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:37:47 compute-0 sudo[219326]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:48 compute-0 podman[219452]: 2026-01-27 22:37:48.059390396 +0000 UTC m=+0.072674530 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 22:37:48 compute-0 python3.9[219493]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:37:50 compute-0 sudo[219926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzjhjpsdadzfxujokoopismrhpgrniiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553469.7660365-377-272790202512552/AnsiballZ_container_config_data.py'
Jan 27 22:37:50 compute-0 sudo[219926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:50 compute-0 rsyslogd[1003]: imjournal: 695 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 27 22:37:50 compute-0 python3.9[219928]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi config_pattern=*.json debug=False
Jan 27 22:37:50 compute-0 sudo[219926]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:51 compute-0 sudo[220078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsqdlxvqghwagnlxwnvdspebxwnqeyff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553470.779674-388-26750538044418/AnsiballZ_container_config_hash.py'
Jan 27 22:37:51 compute-0 sudo[220078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:51 compute-0 python3.9[220080]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 22:37:51 compute-0 sudo[220078]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:52 compute-0 sudo[220230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rljxiiirmxeafkvcjsaatgiroihkppej ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769553471.7259753-398-230515395479065/AnsiballZ_edpm_container_manage.py'
Jan 27 22:37:52 compute-0 sudo[220230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:52 compute-0 python3[220232]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi config_id=ceilometer_agent_ipmi config_overrides={} config_patterns=*.json containers=['ceilometer_agent_ipmi'] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 22:37:52 compute-0 podman[220271]: 2026-01-27 22:37:52.73109999 +0000 UTC m=+0.061380766 container create d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 27 22:37:52 compute-0 podman[220271]: 2026-01-27 22:37:52.696129552 +0000 UTC m=+0.026410428 image pull a92f7bca491c0b0ce2687db04282e6791be0613adb46862c56450b0e1308679d quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Jan 27 22:37:52 compute-0 python3[220232]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d --healthcheck-command /openstack/healthcheck ipmi --label config_id=ceilometer_agent_ipmi --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z --volume /var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
Jan 27 22:37:52 compute-0 sudo[220230]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:53 compute-0 sudo[220460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpdpcfpxfscqvzktyxhrmwkmjpgbwybh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553473.0199037-406-23659043364025/AnsiballZ_stat.py'
Jan 27 22:37:53 compute-0 sudo[220460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:53 compute-0 python3.9[220462]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:37:53 compute-0 sudo[220460]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:53 compute-0 sudo[220614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvfehhgbrnyzhvgfbjnvevszzceilhal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553473.7436764-415-11186456391604/AnsiballZ_file.py'
Jan 27 22:37:53 compute-0 sudo[220614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:54 compute-0 python3.9[220616]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:37:54 compute-0 sudo[220614]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:54 compute-0 sudo[220690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwlpsfmtletpmwuyjyegigjdazlatpvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553473.7436764-415-11186456391604/AnsiballZ_stat.py'
Jan 27 22:37:54 compute-0 sudo[220690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:54 compute-0 podman[220692]: 2026-01-27 22:37:54.528842971 +0000 UTC m=+0.052896696 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 22:37:54 compute-0 python3.9[220693]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:37:54 compute-0 sudo[220690]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:55 compute-0 sudo[220865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfpempicrecewqwcaanicuprezszofum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553474.7342832-415-252063971709508/AnsiballZ_copy.py'
Jan 27 22:37:55 compute-0 sudo[220865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:55 compute-0 python3.9[220867]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769553474.7342832-415-252063971709508/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:37:55 compute-0 sudo[220865]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:55 compute-0 sudo[220941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahxpfexudpztsjcyzckaxnxavjuopred ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553474.7342832-415-252063971709508/AnsiballZ_systemd.py'
Jan 27 22:37:56 compute-0 sudo[220941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:56 compute-0 python3.9[220943]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 22:37:56 compute-0 systemd[1]: Reloading.
Jan 27 22:37:56 compute-0 systemd-rc-local-generator[220971]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:37:56 compute-0 systemd-sysv-generator[220974]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:37:56 compute-0 sudo[220941]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:56 compute-0 sudo[221052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afxwchnqrlxmhdwjddmiovfbmitvamvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553474.7342832-415-252063971709508/AnsiballZ_systemd.py'
Jan 27 22:37:56 compute-0 sudo[221052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:57 compute-0 python3.9[221054]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:37:57 compute-0 systemd[1]: Reloading.
Jan 27 22:37:57 compute-0 systemd-sysv-generator[221086]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:37:57 compute-0 systemd-rc-local-generator[221078]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:37:57 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Jan 27 22:37:57 compute-0 systemd[1]: Started libcrun container.
Jan 27 22:37:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c8171292e4315a3c719ba5edc8ef0660b8864de27b29cd832095f918fc4d093/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 22:37:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c8171292e4315a3c719ba5edc8ef0660b8864de27b29cd832095f918fc4d093/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Jan 27 22:37:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c8171292e4315a3c719ba5edc8ef0660b8864de27b29cd832095f918fc4d093/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Jan 27 22:37:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c8171292e4315a3c719ba5edc8ef0660b8864de27b29cd832095f918fc4d093/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Jan 27 22:37:57 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236.
Jan 27 22:37:57 compute-0 podman[221095]: 2026-01-27 22:37:57.631372938 +0000 UTC m=+0.152744912 container init d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 27 22:37:57 compute-0 ceilometer_agent_ipmi[221110]: + sudo -E kolla_set_configs
Jan 27 22:37:57 compute-0 sudo[221116]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Jan 27 22:37:57 compute-0 sudo[221116]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 27 22:37:57 compute-0 sudo[221116]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 27 22:37:57 compute-0 podman[221095]: 2026-01-27 22:37:57.675383861 +0000 UTC m=+0.196755835 container start d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 22:37:57 compute-0 podman[221095]: ceilometer_agent_ipmi
Jan 27 22:37:57 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Jan 27 22:37:57 compute-0 ceilometer_agent_ipmi[221110]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 27 22:37:57 compute-0 ceilometer_agent_ipmi[221110]: INFO:__main__:Validating config file
Jan 27 22:37:57 compute-0 ceilometer_agent_ipmi[221110]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 27 22:37:57 compute-0 ceilometer_agent_ipmi[221110]: INFO:__main__:Copying service configuration files
Jan 27 22:37:57 compute-0 ceilometer_agent_ipmi[221110]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Jan 27 22:37:57 compute-0 ceilometer_agent_ipmi[221110]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Jan 27 22:37:57 compute-0 ceilometer_agent_ipmi[221110]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Jan 27 22:37:57 compute-0 ceilometer_agent_ipmi[221110]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Jan 27 22:37:57 compute-0 ceilometer_agent_ipmi[221110]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Jan 27 22:37:57 compute-0 ceilometer_agent_ipmi[221110]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Jan 27 22:37:57 compute-0 ceilometer_agent_ipmi[221110]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 27 22:37:57 compute-0 ceilometer_agent_ipmi[221110]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 27 22:37:57 compute-0 ceilometer_agent_ipmi[221110]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 27 22:37:57 compute-0 ceilometer_agent_ipmi[221110]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 27 22:37:57 compute-0 ceilometer_agent_ipmi[221110]: INFO:__main__:Writing out command to execute
Jan 27 22:37:57 compute-0 sudo[221052]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:57 compute-0 sudo[221116]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:57 compute-0 ceilometer_agent_ipmi[221110]: ++ cat /run_command
Jan 27 22:37:57 compute-0 ceilometer_agent_ipmi[221110]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 27 22:37:57 compute-0 ceilometer_agent_ipmi[221110]: + ARGS=
Jan 27 22:37:57 compute-0 ceilometer_agent_ipmi[221110]: + sudo kolla_copy_cacerts
Jan 27 22:37:57 compute-0 sudo[221134]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Jan 27 22:37:57 compute-0 sudo[221134]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 27 22:37:57 compute-0 sudo[221134]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 27 22:37:57 compute-0 sudo[221134]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:57 compute-0 ceilometer_agent_ipmi[221110]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 27 22:37:57 compute-0 ceilometer_agent_ipmi[221110]: + [[ ! -n '' ]]
Jan 27 22:37:57 compute-0 ceilometer_agent_ipmi[221110]: + . kolla_extend_start
Jan 27 22:37:57 compute-0 ceilometer_agent_ipmi[221110]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Jan 27 22:37:57 compute-0 ceilometer_agent_ipmi[221110]: + umask 0022
Jan 27 22:37:57 compute-0 ceilometer_agent_ipmi[221110]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Jan 27 22:37:57 compute-0 podman[221117]: 2026-01-27 22:37:57.771400528 +0000 UTC m=+0.086193642 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 27 22:37:57 compute-0 systemd[1]: d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236-1e9debe477195ca2.service: Main process exited, code=exited, status=1/FAILURE
Jan 27 22:37:57 compute-0 systemd[1]: d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236-1e9debe477195ca2.service: Failed with result 'exit-code'.
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.592 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.592 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.593 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.593 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.593 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.593 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.593 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.593 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.594 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.594 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.594 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.594 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.594 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.595 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.595 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.595 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.595 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.595 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.596 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.596 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.596 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.596 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.596 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.596 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.597 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.597 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.597 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.597 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.597 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.597 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.598 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.598 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.598 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.598 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.598 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.598 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.598 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.599 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.599 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.599 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.599 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.599 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.600 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.600 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.600 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.600 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.600 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.600 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.601 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.601 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.601 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.601 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.601 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.601 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.602 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.602 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.602 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.602 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.602 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.602 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.603 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.603 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.603 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.603 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.603 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.603 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.604 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.604 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.604 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.604 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.604 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.605 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.605 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.605 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.605 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.605 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.605 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.606 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.606 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.606 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.606 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.606 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.606 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.607 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.607 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.607 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.607 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.607 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.607 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.608 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.608 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.608 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.608 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.608 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.608 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.609 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.609 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.609 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.609 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.609 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.609 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.610 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.610 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.610 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.610 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.610 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.610 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.611 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.611 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.611 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.611 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.611 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.612 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.612 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.612 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.612 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.612 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.612 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.613 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.613 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.613 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.613 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.613 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.613 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.614 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.614 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.614 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.614 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.614 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.614 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.615 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.615 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.615 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.615 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.615 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.615 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.616 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.616 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.616 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.616 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.616 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.616 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.617 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.617 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.617 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.617 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.617 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.617 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.618 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.618 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.618 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.618 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.618 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.618 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.619 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.619 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.619 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.619 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.619 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.619 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.620 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.620 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.620 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.620 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.620 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.637 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.638 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.639 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Jan 27 22:37:58 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:58.737 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpcf8tz5vr/privsep.sock']
Jan 27 22:37:58 compute-0 sudo[221297]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmpcf8tz5vr/privsep.sock
Jan 27 22:37:58 compute-0 sudo[221297]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 27 22:37:58 compute-0 sudo[221297]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 27 22:37:58 compute-0 python3.9[221292]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 27 22:37:59 compute-0 podman[221377]: 2026-01-27 22:37:59.355424492 +0000 UTC m=+0.053913092 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, managed_by=edpm_ansible, release=1755695350, vendor=Red Hat, Inc., config_id=openstack_network_exporter, io.buildah.version=1.33.7, container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, name=ubi9-minimal, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 27 22:37:59 compute-0 sudo[221297]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.390 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.391 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpcf8tz5vr/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.280 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.284 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.285 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.286 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Jan 27 22:37:59 compute-0 sudo[221474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujntfnywdfcjxylkgizdcjhuhyuzccpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553479.2283468-460-69693994830912/AnsiballZ_stat.py'
Jan 27 22:37:59 compute-0 sudo[221474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.493 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.493 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.494 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.494 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.494 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.494 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.494 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.495 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.495 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.495 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.495 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.495 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.495 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.498 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.498 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.498 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.498 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.498 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.498 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.498 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.498 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.499 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.499 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.499 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.499 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.499 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.499 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.499 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.499 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.500 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.500 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.500 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.500 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.500 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.500 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.500 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.501 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.501 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.501 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.501 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.501 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.501 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.501 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.501 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.502 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.502 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.502 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.502 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.502 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.502 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.502 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.502 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.503 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.503 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.503 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.503 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.503 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.503 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.503 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.504 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.504 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.504 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.504 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.504 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.504 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.504 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.504 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.505 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.505 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.505 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.505 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.505 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.505 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.505 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.505 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.506 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.506 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.506 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.506 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.506 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.506 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.506 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.506 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.506 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.507 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.507 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.507 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.507 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.507 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.507 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.507 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.507 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.508 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.508 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.508 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.508 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.508 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.508 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.508 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.508 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.508 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.509 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.509 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.509 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.509 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.509 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.509 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.509 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.509 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.509 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.510 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.510 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.510 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.510 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.510 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.510 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.510 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.511 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.511 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.511 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.511 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.511 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.511 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.511 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.511 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.511 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.512 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.512 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.512 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.512 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.512 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.512 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.512 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.512 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.513 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.513 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.513 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.513 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.513 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.513 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.513 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.513 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.513 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.514 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.514 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.514 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.514 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.514 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.514 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.514 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.514 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.515 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.515 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.515 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.515 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.515 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.515 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.515 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.515 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.515 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.516 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.516 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.516 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.516 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.516 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.516 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.516 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.516 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.516 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.517 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.517 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.517 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.517 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.517 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.517 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.517 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.517 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.517 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.518 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.518 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.518 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.518 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.518 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.518 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.518 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.518 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.518 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.519 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.519 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.519 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.519 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.519 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.519 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.519 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.519 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.520 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.520 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.520 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.520 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.520 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.520 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.520 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.520 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.520 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.521 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.521 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.521 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.521 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.521 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.521 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.521 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.521 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.522 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.522 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.522 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.522 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Jan 27 22:37:59 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:37:59.524 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Jan 27 22:37:59 compute-0 python3.9[221476]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:37:59 compute-0 sudo[221474]: pam_unix(sudo:session): session closed for user root
Jan 27 22:37:59 compute-0 podman[201529]: time="2026-01-27T22:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:37:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 24317 "" "Go-http-client/1.1"
Jan 27 22:37:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3420 "" "Go-http-client/1.1"
Jan 27 22:37:59 compute-0 sudo[221601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfijyinzmuxccluplekijwxirrxfeiys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553479.2283468-460-69693994830912/AnsiballZ_copy.py'
Jan 27 22:37:59 compute-0 sudo[221601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:00 compute-0 python3.9[221603]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769553479.2283468-460-69693994830912/.source.yaml _original_basename=.t_jyrqdj follow=False checksum=9a0bf93d339a58f9c3a6481455f8e547324d0a9e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:38:00 compute-0 sudo[221601]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:00 compute-0 sudo[221753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtwvjxdjhqxsngozypkqncxojxawhquy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553480.4675088-477-114470100842038/AnsiballZ_file.py'
Jan 27 22:38:00 compute-0 sudo[221753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:00 compute-0 python3.9[221755]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:38:00 compute-0 sudo[221753]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:01 compute-0 sudo[221905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etcfgmnjtaphjqkqkwqbesssrgpmkwsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553481.0673726-485-192573146490561/AnsiballZ_file.py'
Jan 27 22:38:01 compute-0 sudo[221905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:01 compute-0 openstack_network_exporter[204648]: ERROR   22:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:38:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:38:01 compute-0 openstack_network_exporter[204648]: ERROR   22:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:38:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:38:01 compute-0 python3.9[221907]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 27 22:38:01 compute-0 sudo[221905]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:02 compute-0 python3.9[222057]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/kepler state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:38:03 compute-0 sudo[222478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hunlibnircgmosshviblqrjjuxzpsaib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553483.7314596-519-145184197602838/AnsiballZ_container_config_data.py'
Jan 27 22:38:03 compute-0 sudo[222478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:38:04.123 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:38:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:38:04.124 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:38:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:38:04.124 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:38:04 compute-0 python3.9[222480]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/kepler config_pattern=*.json debug=False
Jan 27 22:38:04 compute-0 sudo[222478]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:04 compute-0 sudo[222630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-likeoejvkkqijfvvvruszxvoztniffpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553484.5439196-530-268448288287071/AnsiballZ_container_config_hash.py'
Jan 27 22:38:04 compute-0 sudo[222630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:05 compute-0 python3.9[222632]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 27 22:38:05 compute-0 sudo[222630]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:05 compute-0 sudo[222782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztskudxpolauirykkglgitxsmecndnon ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769553485.351656-540-119397727896991/AnsiballZ_edpm_container_manage.py'
Jan 27 22:38:05 compute-0 sudo[222782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:05 compute-0 python3[222784]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/kepler config_id=kepler config_overrides={} config_patterns=*.json containers=['kepler'] log_base_path=/var/log/containers/stdouts debug=False
Jan 27 22:38:06 compute-0 podman[222817]: 2026-01-27 22:38:06.162939295 +0000 UTC m=+0.053188624 container create 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, com.redhat.component=ubi9-container, version=9.4, config_id=kepler, io.openshift.tags=base rhel9, release=1214.1726694543, release-0.7.12=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, container_name=kepler, distribution-scope=public, architecture=x86_64, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', 
'/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 22:38:06 compute-0 podman[222817]: 2026-01-27 22:38:06.136325113 +0000 UTC m=+0.026574482 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Jan 27 22:38:06 compute-0 python3[222784]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_CONTAINER_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env EXPOSE_VM_METRICS=true --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=kepler --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
Jan 27 22:38:06 compute-0 sudo[222782]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:06 compute-0 sudo[223005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exkmdemoihucjizqpkmngsytxithgcsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553486.4720292-548-96856956580594/AnsiballZ_stat.py'
Jan 27 22:38:06 compute-0 sudo[223005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:06 compute-0 python3.9[223007]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:38:07 compute-0 sudo[223005]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:07 compute-0 sudo[223159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmdxewrgxifjzjencfrtjwiwqgbhxyin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553487.4121518-557-106978789219828/AnsiballZ_file.py'
Jan 27 22:38:07 compute-0 sudo[223159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:07 compute-0 python3.9[223161]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:38:07 compute-0 sudo[223159]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:08 compute-0 sudo[223250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzwtfsxbvwobwpnxfcuehooszbaecmff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553487.4121518-557-106978789219828/AnsiballZ_stat.py'
Jan 27 22:38:08 compute-0 sudo[223250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:08 compute-0 podman[223209]: 2026-01-27 22:38:08.05551761 +0000 UTC m=+0.054839327 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 27 22:38:08 compute-0 python3.9[223257]: ansible-stat Invoked with path=/etc/systemd/system/edpm_kepler_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:38:08 compute-0 sudo[223250]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:08 compute-0 sudo[223407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naqnakzbjcaqfnniedavrzfaitcdkamn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553488.2762737-557-105991466048731/AnsiballZ_copy.py'
Jan 27 22:38:08 compute-0 sudo[223407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:08 compute-0 python3.9[223409]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769553488.2762737-557-105991466048731/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:38:08 compute-0 sudo[223407]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:09 compute-0 sudo[223483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipiruoixyjzjhcwqalfwwspcnbvzliot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553488.2762737-557-105991466048731/AnsiballZ_systemd.py'
Jan 27 22:38:09 compute-0 sudo[223483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:09 compute-0 python3.9[223485]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 27 22:38:09 compute-0 systemd[1]: Reloading.
Jan 27 22:38:09 compute-0 systemd-rc-local-generator[223508]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:38:09 compute-0 systemd-sysv-generator[223512]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:38:09 compute-0 sudo[223483]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:09 compute-0 podman[223520]: 2026-01-27 22:38:09.780039096 +0000 UTC m=+0.054492137 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 22:38:09 compute-0 sudo[223612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhupxxpmttvgcjctnpolvducmcfbuyjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553488.2762737-557-105991466048731/AnsiballZ_systemd.py'
Jan 27 22:38:09 compute-0 sudo[223612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:10 compute-0 python3.9[223614]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 27 22:38:10 compute-0 systemd[1]: Reloading.
Jan 27 22:38:10 compute-0 systemd-rc-local-generator[223643]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 27 22:38:10 compute-0 systemd-sysv-generator[223646]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 27 22:38:10 compute-0 systemd[1]: Starting kepler container...
Jan 27 22:38:10 compute-0 systemd[1]: Started libcrun container.
Jan 27 22:38:10 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650.
Jan 27 22:38:10 compute-0 podman[223654]: 2026-01-27 22:38:10.769006023 +0000 UTC m=+0.122836934 container init 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-type=git, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, release=1214.1726694543, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, distribution-scope=public)
Jan 27 22:38:10 compute-0 kepler[223669]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 27 22:38:10 compute-0 podman[223654]: 2026-01-27 22:38:10.792609327 +0000 UTC m=+0.146440218 container start 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, maintainer=Red Hat, Inc., container_name=kepler, distribution-scope=public, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, release-0.7.12=, version=9.4, io.openshift.tags=base rhel9, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 22:38:10 compute-0 podman[223654]: kepler
Jan 27 22:38:10 compute-0 kepler[223669]: I0127 22:38:10.798108       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Jan 27 22:38:10 compute-0 kepler[223669]: I0127 22:38:10.798218       1 config.go:293] using gCgroup ID in the BPF program: true
Jan 27 22:38:10 compute-0 kepler[223669]: I0127 22:38:10.798236       1 config.go:295] kernel version: 5.14
Jan 27 22:38:10 compute-0 kepler[223669]: I0127 22:38:10.798713       1 power.go:78] Unable to obtain power, use estimate method
Jan 27 22:38:10 compute-0 kepler[223669]: I0127 22:38:10.798729       1 redfish.go:169] failed to get redfish credential file path
Jan 27 22:38:10 compute-0 kepler[223669]: I0127 22:38:10.799032       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Jan 27 22:38:10 compute-0 kepler[223669]: I0127 22:38:10.799037       1 power.go:79] using none to obtain power
Jan 27 22:38:10 compute-0 kepler[223669]: E0127 22:38:10.799048       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Jan 27 22:38:10 compute-0 kepler[223669]: E0127 22:38:10.799064       1 exporter.go:154] failed to init GPU accelerators: no devices found
Jan 27 22:38:10 compute-0 kepler[223669]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 27 22:38:10 compute-0 kepler[223669]: I0127 22:38:10.800446       1 exporter.go:84] Number of CPUs: 8
Jan 27 22:38:10 compute-0 systemd[1]: Started kepler container.
Jan 27 22:38:10 compute-0 sudo[223612]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:10 compute-0 podman[223679]: 2026-01-27 22:38:10.863323655 +0000 UTC m=+0.062137806 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, config_id=kepler, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, build-date=2024-09-18T21:23:30, distribution-scope=public, vcs-type=git, version=9.4, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., release=1214.1726694543, io.openshift.tags=base rhel9, architecture=x86_64, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Jan 27 22:38:10 compute-0 systemd[1]: 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650-7f5f537b291afdab.service: Main process exited, code=exited, status=1/FAILURE
Jan 27 22:38:10 compute-0 systemd[1]: 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650-7f5f537b291afdab.service: Failed with result 'exit-code'.
Jan 27 22:38:11 compute-0 kepler[223669]: I0127 22:38:11.358235       1 watcher.go:83] Using in cluster k8s config
Jan 27 22:38:11 compute-0 kepler[223669]: I0127 22:38:11.358285       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Jan 27 22:38:11 compute-0 kepler[223669]: E0127 22:38:11.358341       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Jan 27 22:38:11 compute-0 kepler[223669]: I0127 22:38:11.362599       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Jan 27 22:38:11 compute-0 kepler[223669]: I0127 22:38:11.362642       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Jan 27 22:38:11 compute-0 kepler[223669]: I0127 22:38:11.367379       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Jan 27 22:38:11 compute-0 kepler[223669]: I0127 22:38:11.367417       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Jan 27 22:38:11 compute-0 kepler[223669]: I0127 22:38:11.377618       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 27 22:38:11 compute-0 kepler[223669]: I0127 22:38:11.377662       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 27 22:38:11 compute-0 kepler[223669]: I0127 22:38:11.377678       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Jan 27 22:38:11 compute-0 kepler[223669]: I0127 22:38:11.385616       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 27 22:38:11 compute-0 kepler[223669]: I0127 22:38:11.385658       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 27 22:38:11 compute-0 kepler[223669]: I0127 22:38:11.385665       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 27 22:38:11 compute-0 kepler[223669]: I0127 22:38:11.385670       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 27 22:38:11 compute-0 kepler[223669]: I0127 22:38:11.385677       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 27 22:38:11 compute-0 kepler[223669]: I0127 22:38:11.385690       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Jan 27 22:38:11 compute-0 kepler[223669]: I0127 22:38:11.385820       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Jan 27 22:38:11 compute-0 kepler[223669]: I0127 22:38:11.385849       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Jan 27 22:38:11 compute-0 kepler[223669]: I0127 22:38:11.385899       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Jan 27 22:38:11 compute-0 kepler[223669]: I0127 22:38:11.385921       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Jan 27 22:38:11 compute-0 kepler[223669]: I0127 22:38:11.386037       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Jan 27 22:38:11 compute-0 kepler[223669]: I0127 22:38:11.386670       1 exporter.go:208] Started Kepler in 588.735274ms
Jan 27 22:38:11 compute-0 python3.9[223851]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 27 22:38:12 compute-0 sudo[224027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcxckjcmihnopaijxmgrcrwhroylvutq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553492.0139291-602-245182859320003/AnsiballZ_stat.py'
Jan 27 22:38:12 compute-0 podman[223985]: 2026-01-27 22:38:12.368568882 +0000 UTC m=+0.070516314 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 22:38:12 compute-0 sudo[224027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:12 compute-0 python3.9[224036]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:38:12 compute-0 sudo[224027]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:12 compute-0 sudo[224159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxlphnqpvlehvrphdeqiyyiwwdqdcoxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553492.0139291-602-245182859320003/AnsiballZ_copy.py'
Jan 27 22:38:12 compute-0 sudo[224159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:13 compute-0 python3.9[224161]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769553492.0139291-602-245182859320003/.source.yaml _original_basename=.7i3stefr follow=False checksum=305b5cb8ed0bbe35bf5a346aaebf8cb6a3d7f99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:38:13 compute-0 sudo[224159]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:13 compute-0 sudo[224311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcahnvjbhowkjtklrqkoxzsourgpofvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553493.3962839-617-82954456245216/AnsiballZ_systemd.py'
Jan 27 22:38:13 compute-0 sudo[224311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:14 compute-0 python3.9[224313]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 22:38:14 compute-0 systemd[1]: Stopping ceilometer_agent_ipmi container...
Jan 27 22:38:14 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:38:14.183 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Jan 27 22:38:14 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:38:14.284 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Jan 27 22:38:14 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:38:14.285 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Jan 27 22:38:14 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:38:14.285 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Jan 27 22:38:14 compute-0 ceilometer_agent_ipmi[221110]: 2026-01-27 22:38:14.294 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Jan 27 22:38:14 compute-0 systemd[1]: libpod-d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236.scope: Deactivated successfully.
Jan 27 22:38:14 compute-0 systemd[1]: libpod-d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236.scope: Consumed 2.143s CPU time.
Jan 27 22:38:14 compute-0 podman[224317]: 2026-01-27 22:38:14.468383544 +0000 UTC m=+0.369705621 container died d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 22:38:14 compute-0 systemd[1]: d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236-1e9debe477195ca2.timer: Deactivated successfully.
Jan 27 22:38:14 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236.
Jan 27 22:38:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236-userdata-shm.mount: Deactivated successfully.
Jan 27 22:38:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c8171292e4315a3c719ba5edc8ef0660b8864de27b29cd832095f918fc4d093-merged.mount: Deactivated successfully.
Jan 27 22:38:14 compute-0 podman[224317]: 2026-01-27 22:38:14.528878896 +0000 UTC m=+0.430200973 container cleanup d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 22:38:14 compute-0 podman[224317]: ceilometer_agent_ipmi
Jan 27 22:38:14 compute-0 podman[224348]: ceilometer_agent_ipmi
Jan 27 22:38:14 compute-0 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Jan 27 22:38:14 compute-0 systemd[1]: Stopped ceilometer_agent_ipmi container.
Jan 27 22:38:14 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Jan 27 22:38:14 compute-0 systemd[1]: Started libcrun container.
Jan 27 22:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c8171292e4315a3c719ba5edc8ef0660b8864de27b29cd832095f918fc4d093/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 27 22:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c8171292e4315a3c719ba5edc8ef0660b8864de27b29cd832095f918fc4d093/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Jan 27 22:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c8171292e4315a3c719ba5edc8ef0660b8864de27b29cd832095f918fc4d093/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Jan 27 22:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c8171292e4315a3c719ba5edc8ef0660b8864de27b29cd832095f918fc4d093/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Jan 27 22:38:14 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236.
Jan 27 22:38:14 compute-0 podman[224359]: 2026-01-27 22:38:14.785831316 +0000 UTC m=+0.155439231 container init d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 27 22:38:14 compute-0 ceilometer_agent_ipmi[224374]: + sudo -E kolla_set_configs
Jan 27 22:38:14 compute-0 podman[224359]: 2026-01-27 22:38:14.816223146 +0000 UTC m=+0.185831071 container start d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 22:38:14 compute-0 sudo[224380]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Jan 27 22:38:14 compute-0 sudo[224380]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 27 22:38:14 compute-0 sudo[224380]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 27 22:38:14 compute-0 podman[224359]: ceilometer_agent_ipmi
Jan 27 22:38:14 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Jan 27 22:38:14 compute-0 sudo[224311]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:14 compute-0 ceilometer_agent_ipmi[224374]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 27 22:38:14 compute-0 ceilometer_agent_ipmi[224374]: INFO:__main__:Validating config file
Jan 27 22:38:14 compute-0 ceilometer_agent_ipmi[224374]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 27 22:38:14 compute-0 ceilometer_agent_ipmi[224374]: INFO:__main__:Copying service configuration files
Jan 27 22:38:14 compute-0 ceilometer_agent_ipmi[224374]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Jan 27 22:38:14 compute-0 ceilometer_agent_ipmi[224374]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Jan 27 22:38:14 compute-0 ceilometer_agent_ipmi[224374]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Jan 27 22:38:14 compute-0 ceilometer_agent_ipmi[224374]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Jan 27 22:38:14 compute-0 ceilometer_agent_ipmi[224374]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Jan 27 22:38:14 compute-0 ceilometer_agent_ipmi[224374]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Jan 27 22:38:14 compute-0 ceilometer_agent_ipmi[224374]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 27 22:38:14 compute-0 ceilometer_agent_ipmi[224374]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 27 22:38:14 compute-0 ceilometer_agent_ipmi[224374]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 27 22:38:14 compute-0 ceilometer_agent_ipmi[224374]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 27 22:38:14 compute-0 ceilometer_agent_ipmi[224374]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 27 22:38:14 compute-0 ceilometer_agent_ipmi[224374]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 27 22:38:14 compute-0 ceilometer_agent_ipmi[224374]: INFO:__main__:Writing out command to execute
Jan 27 22:38:14 compute-0 sudo[224380]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:14 compute-0 ceilometer_agent_ipmi[224374]: ++ cat /run_command
Jan 27 22:38:14 compute-0 ceilometer_agent_ipmi[224374]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 27 22:38:14 compute-0 ceilometer_agent_ipmi[224374]: + ARGS=
Jan 27 22:38:14 compute-0 ceilometer_agent_ipmi[224374]: + sudo kolla_copy_cacerts
Jan 27 22:38:14 compute-0 podman[224381]: 2026-01-27 22:38:14.913034723 +0000 UTC m=+0.085799402 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 27 22:38:14 compute-0 sudo[224403]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Jan 27 22:38:14 compute-0 systemd[1]: d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236-c24eab88f2dc29e.service: Main process exited, code=exited, status=1/FAILURE
Jan 27 22:38:14 compute-0 sudo[224403]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 27 22:38:14 compute-0 sudo[224403]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 27 22:38:14 compute-0 systemd[1]: d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236-c24eab88f2dc29e.service: Failed with result 'exit-code'.
Jan 27 22:38:14 compute-0 sudo[224403]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:14 compute-0 ceilometer_agent_ipmi[224374]: + [[ ! -n '' ]]
Jan 27 22:38:14 compute-0 ceilometer_agent_ipmi[224374]: + . kolla_extend_start
Jan 27 22:38:14 compute-0 ceilometer_agent_ipmi[224374]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 27 22:38:14 compute-0 ceilometer_agent_ipmi[224374]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Jan 27 22:38:14 compute-0 ceilometer_agent_ipmi[224374]: + umask 0022
Jan 27 22:38:14 compute-0 ceilometer_agent_ipmi[224374]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Jan 27 22:38:15 compute-0 sudo[224553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqlxywsevetsbezllcuxmoefwzqjhnpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553495.0596795-625-84940350551158/AnsiballZ_systemd.py'
Jan 27 22:38:15 compute-0 sudo[224553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:15 compute-0 python3.9[224555]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.784 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.785 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.785 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.785 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.785 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.785 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.785 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.785 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.785 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.785 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.786 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.787 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.787 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.788 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.788 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.788 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.789 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.789 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.789 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.790 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.790 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.790 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.791 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.791 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.791 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.792 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.792 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.792 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.793 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.793 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.793 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.794 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.794 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.794 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.794 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.795 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.796 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.796 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.796 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.797 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.797 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.797 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.797 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.798 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.798 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.798 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.799 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.799 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.799 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.800 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.800 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.800 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.801 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.801 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.801 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.801 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.802 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.802 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.802 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.802 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.803 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.803 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.803 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.803 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.803 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.803 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.804 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.804 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.805 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.805 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.805 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.805 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.806 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.806 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.806 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.806 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.807 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.807 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.807 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.808 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.808 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.808 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.808 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.808 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.808 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.808 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.808 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.809 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.809 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.809 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.809 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.809 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.809 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.809 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.810 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.810 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.810 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.810 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.810 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.810 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.810 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.811 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.811 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.811 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.811 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.811 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.811 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.811 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.811 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.811 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.812 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.812 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.812 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.812 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.812 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.812 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.812 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.812 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.812 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.812 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.813 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.813 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 systemd[1]: Stopping kepler container...
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.813 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.813 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.813 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.813 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.813 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.813 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.813 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.813 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.813 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.814 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.814 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.814 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.814 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.814 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.814 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.814 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.814 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.814 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.814 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.814 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.814 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.815 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.815 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.815 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.815 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.815 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.815 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.815 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.815 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.815 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.815 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.816 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.816 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.816 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.816 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.816 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.816 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.816 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.816 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.816 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.816 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.816 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.817 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.834 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.836 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.837 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Jan 27 22:38:15 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:15.849 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp1tobvpvc/privsep.sock']
Jan 27 22:38:15 compute-0 sudo[224573]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmp1tobvpvc/privsep.sock
Jan 27 22:38:15 compute-0 sudo[224573]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 27 22:38:15 compute-0 sudo[224573]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 27 22:38:15 compute-0 kepler[223669]: I0127 22:38:15.874799       1 exporter.go:218] Received shutdown signal
Jan 27 22:38:15 compute-0 kepler[223669]: I0127 22:38:15.876555       1 exporter.go:226] Exiting...
Jan 27 22:38:16 compute-0 systemd[1]: libpod-0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650.scope: Deactivated successfully.
Jan 27 22:38:16 compute-0 podman[224559]: 2026-01-27 22:38:16.090106429 +0000 UTC m=+0.263605323 container died 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.expose-services=, name=ubi9, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.openshift.tags=base rhel9, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, io.buildah.version=1.29.0, release-0.7.12=, vcs-type=git, container_name=kepler)
Jan 27 22:38:16 compute-0 systemd[1]: 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650-7f5f537b291afdab.timer: Deactivated successfully.
Jan 27 22:38:16 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650.
Jan 27 22:38:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650-userdata-shm.mount: Deactivated successfully.
Jan 27 22:38:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0a6e0f3fd51ce87d77d8ce8dc6c8c58dfb7b2c9707b1a33883b204cc5dffcad-merged.mount: Deactivated successfully.
Jan 27 22:38:16 compute-0 podman[224559]: 2026-01-27 22:38:16.135403206 +0000 UTC m=+0.308902110 container cleanup 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=kepler, io.openshift.expose-services=, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, maintainer=Red Hat, Inc., release-0.7.12=, version=9.4, vcs-type=git, io.openshift.tags=base rhel9, name=ubi9, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0)
Jan 27 22:38:16 compute-0 podman[224559]: kepler
Jan 27 22:38:16 compute-0 podman[224592]: kepler
Jan 27 22:38:16 compute-0 systemd[1]: edpm_kepler.service: Deactivated successfully.
Jan 27 22:38:16 compute-0 systemd[1]: Stopped kepler container.
Jan 27 22:38:16 compute-0 systemd[1]: Starting kepler container...
Jan 27 22:38:16 compute-0 systemd[1]: Started libcrun container.
Jan 27 22:38:16 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650.
Jan 27 22:38:16 compute-0 podman[224603]: 2026-01-27 22:38:16.357591902 +0000 UTC m=+0.121474429 container init 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, architecture=x86_64, com.redhat.component=ubi9-container, release=1214.1726694543, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, config_id=kepler, io.openshift.expose-services=, version=9.4, io.buildah.version=1.29.0, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, name=ubi9, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 27 22:38:16 compute-0 kepler[224618]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 27 22:38:16 compute-0 podman[224603]: 2026-01-27 22:38:16.381036982 +0000 UTC m=+0.144919489 container start 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, container_name=kepler, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, release-0.7.12=, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, name=ubi9, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 22:38:16 compute-0 kepler[224618]: I0127 22:38:16.383043       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Jan 27 22:38:16 compute-0 kepler[224618]: I0127 22:38:16.383152       1 config.go:293] using gCgroup ID in the BPF program: true
Jan 27 22:38:16 compute-0 kepler[224618]: I0127 22:38:16.383166       1 config.go:295] kernel version: 5.14
Jan 27 22:38:16 compute-0 kepler[224618]: I0127 22:38:16.383602       1 power.go:78] Unable to obtain power, use estimate method
Jan 27 22:38:16 compute-0 kepler[224618]: I0127 22:38:16.383623       1 redfish.go:169] failed to get redfish credential file path
Jan 27 22:38:16 compute-0 kepler[224618]: I0127 22:38:16.383962       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Jan 27 22:38:16 compute-0 kepler[224618]: I0127 22:38:16.383974       1 power.go:79] using none to obtain power
Jan 27 22:38:16 compute-0 kepler[224618]: E0127 22:38:16.383985       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Jan 27 22:38:16 compute-0 kepler[224618]: E0127 22:38:16.384003       1 exporter.go:154] failed to init GPU accelerators: no devices found
Jan 27 22:38:16 compute-0 podman[224603]: kepler
Jan 27 22:38:16 compute-0 kepler[224618]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 27 22:38:16 compute-0 kepler[224618]: I0127 22:38:16.385621       1 exporter.go:84] Number of CPUs: 8
Jan 27 22:38:16 compute-0 systemd[1]: Started kepler container.
Jan 27 22:38:16 compute-0 sudo[224553]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:16 compute-0 podman[224628]: 2026-01-27 22:38:16.46292231 +0000 UTC m=+0.070494873 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.tags=base rhel9, managed_by=edpm_ansible, com.redhat.component=ubi9-container, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, distribution-scope=public, io.buildah.version=1.29.0, io.openshift.expose-services=, maintainer=Red Hat, Inc., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, name=ubi9)
Jan 27 22:38:16 compute-0 systemd[1]: 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650-40b8f2d1c40c5722.service: Main process exited, code=exited, status=1/FAILURE
Jan 27 22:38:16 compute-0 systemd[1]: 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650-40b8f2d1c40c5722.service: Failed with result 'exit-code'.
Jan 27 22:38:16 compute-0 sudo[224573]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.507 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.508 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp1tobvpvc/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.393 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.399 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.402 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.402 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.617 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.617 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.618 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.619 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.619 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.619 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.619 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.619 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.619 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.619 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.619 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.620 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.620 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.623 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.623 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.623 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.623 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.624 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.624 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.624 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.624 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.624 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.624 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.624 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.624 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.624 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.625 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.625 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.625 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.625 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.625 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.625 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.626 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.626 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.626 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.626 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.626 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.626 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.626 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.626 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.626 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.627 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.627 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.627 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.627 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.627 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.627 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.627 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.627 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.627 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.627 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.627 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.627 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.628 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.628 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.628 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.628 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.628 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.628 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.628 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.628 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.628 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.628 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.629 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.629 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.629 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.629 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.629 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.629 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.629 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.629 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.629 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.629 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.630 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.630 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.630 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.630 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.630 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.630 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.630 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.630 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.630 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.630 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.630 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.631 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.631 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.631 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.631 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.631 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.631 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.631 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.631 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.631 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.631 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.631 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.631 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.632 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.632 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.632 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.632 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.632 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.632 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.632 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.632 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.632 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.632 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.632 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.633 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.633 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.633 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.633 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.633 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.633 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.633 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.633 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.633 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.633 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.633 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.634 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.634 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.634 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.634 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.634 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.634 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.634 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.634 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.634 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.634 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.634 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.635 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.635 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.635 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.635 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.635 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.635 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.635 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.635 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.635 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.635 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.635 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.636 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.636 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.636 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.636 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.636 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.636 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.636 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.636 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.636 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.636 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.636 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.637 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.637 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.637 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.637 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.637 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.637 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.637 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.637 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.637 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.637 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.637 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.637 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.638 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.638 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.638 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.638 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.638 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.638 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.638 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.638 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.638 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.638 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.638 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.639 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.639 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.639 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.639 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.639 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.639 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.639 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.639 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.639 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.639 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.639 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.640 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.640 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.640 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.640 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.640 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.640 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.640 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.640 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.640 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.640 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.641 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.641 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.641 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.641 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.641 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.641 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.641 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.641 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.641 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.642 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.642 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.642 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.642 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.642 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.642 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.642 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.642 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.642 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.642 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.642 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.643 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Jan 27 22:38:16 compute-0 ceilometer_agent_ipmi[224374]: 2026-01-27 22:38:16.645 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Jan 27 22:38:16 compute-0 sudo[224805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjgkxupyhycivunbdfdwdljnkcbwkmaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553496.58435-633-240351842755298/AnsiballZ_find.py'
Jan 27 22:38:16 compute-0 sudo[224805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:16 compute-0 kepler[224618]: I0127 22:38:16.918452       1 watcher.go:83] Using in cluster k8s config
Jan 27 22:38:16 compute-0 kepler[224618]: I0127 22:38:16.918544       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Jan 27 22:38:16 compute-0 kepler[224618]: E0127 22:38:16.918623       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Jan 27 22:38:16 compute-0 kepler[224618]: I0127 22:38:16.940946       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Jan 27 22:38:16 compute-0 kepler[224618]: I0127 22:38:16.940990       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Jan 27 22:38:16 compute-0 kepler[224618]: I0127 22:38:16.948942       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Jan 27 22:38:16 compute-0 kepler[224618]: I0127 22:38:16.948978       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Jan 27 22:38:16 compute-0 kepler[224618]: I0127 22:38:16.957000       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 27 22:38:16 compute-0 kepler[224618]: I0127 22:38:16.957038       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 27 22:38:16 compute-0 kepler[224618]: I0127 22:38:16.957053       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Jan 27 22:38:16 compute-0 kepler[224618]: I0127 22:38:16.975792       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 27 22:38:16 compute-0 kepler[224618]: I0127 22:38:16.975847       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 27 22:38:16 compute-0 kepler[224618]: I0127 22:38:16.975856       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 27 22:38:16 compute-0 kepler[224618]: I0127 22:38:16.975865       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 27 22:38:16 compute-0 kepler[224618]: I0127 22:38:16.975875       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 27 22:38:16 compute-0 kepler[224618]: I0127 22:38:16.975897       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Jan 27 22:38:16 compute-0 kepler[224618]: I0127 22:38:16.976017       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Jan 27 22:38:16 compute-0 kepler[224618]: I0127 22:38:16.976065       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Jan 27 22:38:16 compute-0 kepler[224618]: I0127 22:38:16.976133       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Jan 27 22:38:16 compute-0 kepler[224618]: I0127 22:38:16.976160       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Jan 27 22:38:16 compute-0 kepler[224618]: I0127 22:38:16.976378       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Jan 27 22:38:16 compute-0 kepler[224618]: I0127 22:38:16.976939       1 exporter.go:208] Started Kepler in 594.092453ms
Jan 27 22:38:16 compute-0 python3.9[224807]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 27 22:38:17 compute-0 sudo[224805]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:18 compute-0 sudo[224967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjdkhjhmlgxezalnhfhtituzoilvqxdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553497.499546-643-39847072070454/AnsiballZ_podman_container_info.py'
Jan 27 22:38:18 compute-0 sudo[224967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:18 compute-0 python3.9[224969]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Jan 27 22:38:18 compute-0 sudo[224967]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:18 compute-0 podman[224982]: 2026-01-27 22:38:18.447736062 +0000 UTC m=+0.144543947 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 27 22:38:19 compute-0 sudo[225154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xphdovzlpmhjjednrrqfqimvwpdlhwsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553498.5002844-651-132377851475592/AnsiballZ_podman_container_exec.py'
Jan 27 22:38:19 compute-0 sudo[225154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:19 compute-0 python3.9[225156]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 22:38:19 compute-0 systemd[1]: Started libpod-conmon-5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16.scope.
Jan 27 22:38:19 compute-0 podman[225157]: 2026-01-27 22:38:19.530813686 +0000 UTC m=+0.152998618 container exec 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller)
Jan 27 22:38:19 compute-0 podman[225157]: 2026-01-27 22:38:19.567785617 +0000 UTC m=+0.189970509 container exec_died 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 27 22:38:19 compute-0 systemd[1]: libpod-conmon-5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16.scope: Deactivated successfully.
Jan 27 22:38:19 compute-0 sudo[225154]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:20 compute-0 sudo[225335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlbjinhzckosqmylyfzllphnpyyzpyqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553499.8286557-659-215615729650963/AnsiballZ_podman_container_exec.py'
Jan 27 22:38:20 compute-0 sudo[225335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:20 compute-0 python3.9[225337]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 22:38:20 compute-0 systemd[1]: Started libpod-conmon-5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16.scope.
Jan 27 22:38:20 compute-0 podman[225338]: 2026-01-27 22:38:20.550340047 +0000 UTC m=+0.120362189 container exec 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 22:38:20 compute-0 podman[225338]: 2026-01-27 22:38:20.582929735 +0000 UTC m=+0.152951777 container exec_died 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 22:38:20 compute-0 systemd[1]: libpod-conmon-5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16.scope: Deactivated successfully.
Jan 27 22:38:20 compute-0 sudo[225335]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:21 compute-0 sudo[225516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gevbepjkgtwenazyvhmglvvqavwljjsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553500.8075852-667-22701200292440/AnsiballZ_file.py'
Jan 27 22:38:21 compute-0 sudo[225516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:21 compute-0 python3.9[225518]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:38:21 compute-0 sudo[225516]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:21 compute-0 sudo[225668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnrdgjrzubcaddzcowkowayfjzevcqnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553501.6851969-676-261438656526892/AnsiballZ_podman_container_info.py'
Jan 27 22:38:22 compute-0 sudo[225668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:22 compute-0 python3.9[225670]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Jan 27 22:38:22 compute-0 sudo[225668]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:22 compute-0 sudo[225833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyeywvmuizxkeccvkfjktnbohlnhukso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553502.5326912-684-39811103324005/AnsiballZ_podman_container_exec.py'
Jan 27 22:38:22 compute-0 sudo[225833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:23 compute-0 python3.9[225835]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 22:38:23 compute-0 systemd[1]: Started libpod-conmon-70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc.scope.
Jan 27 22:38:23 compute-0 podman[225836]: 2026-01-27 22:38:23.305595896 +0000 UTC m=+0.147716200 container exec 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 27 22:38:23 compute-0 podman[225836]: 2026-01-27 22:38:23.33686693 +0000 UTC m=+0.178987234 container exec_died 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 27 22:38:23 compute-0 systemd[1]: libpod-conmon-70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc.scope: Deactivated successfully.
Jan 27 22:38:23 compute-0 sudo[225833]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:24 compute-0 sudo[226016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oglsbdjwoheqddqakitcuzlwaqhistxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553503.6225429-692-84595346589019/AnsiballZ_podman_container_exec.py'
Jan 27 22:38:24 compute-0 sudo[226016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:24 compute-0 python3.9[226018]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 22:38:24 compute-0 systemd[1]: Started libpod-conmon-70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc.scope.
Jan 27 22:38:24 compute-0 podman[226019]: 2026-01-27 22:38:24.357834458 +0000 UTC m=+0.117092645 container exec 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 22:38:24 compute-0 podman[226019]: 2026-01-27 22:38:24.395085776 +0000 UTC m=+0.154343953 container exec_died 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Jan 27 22:38:24 compute-0 systemd[1]: libpod-conmon-70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc.scope: Deactivated successfully.
Jan 27 22:38:24 compute-0 sudo[226016]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:25 compute-0 sudo[226212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfwmnmkckhfxtqpnsefrddhdvdwargsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553504.6443603-700-15964055237879/AnsiballZ_file.py'
Jan 27 22:38:25 compute-0 sudo[226212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:25 compute-0 podman[226174]: 2026-01-27 22:38:25.023362108 +0000 UTC m=+0.080382160 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 22:38:25 compute-0 python3.9[226218]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:38:25 compute-0 sudo[226212]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:25 compute-0 sudo[226375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivvzlmejhnwaobkqnuvomfyyaitrkodz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553505.448419-709-224260120677765/AnsiballZ_podman_container_info.py'
Jan 27 22:38:25 compute-0 sudo[226375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:25 compute-0 python3.9[226377]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Jan 27 22:38:26 compute-0 sudo[226375]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:26 compute-0 sudo[226539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcjkumafurfureechcjgqvxmlyrgxkry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553506.260107-717-78425811942984/AnsiballZ_podman_container_exec.py'
Jan 27 22:38:26 compute-0 sudo[226539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:26 compute-0 python3.9[226541]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 22:38:26 compute-0 systemd[1]: Started libpod-conmon-7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617.scope.
Jan 27 22:38:26 compute-0 podman[226542]: 2026-01-27 22:38:26.931904828 +0000 UTC m=+0.086522530 container exec 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute)
Jan 27 22:38:26 compute-0 podman[226542]: 2026-01-27 22:38:26.963995642 +0000 UTC m=+0.118613334 container exec_died 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 22:38:26 compute-0 systemd[1]: libpod-conmon-7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617.scope: Deactivated successfully.
Jan 27 22:38:27 compute-0 sudo[226539]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:27 compute-0 sudo[226720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttirfxzvdflzvcuqkavqjkapdrpvnbqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553507.2031884-725-179444488518113/AnsiballZ_podman_container_exec.py'
Jan 27 22:38:27 compute-0 sudo[226720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:27 compute-0 python3.9[226722]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 22:38:27 compute-0 systemd[1]: Started libpod-conmon-7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617.scope.
Jan 27 22:38:27 compute-0 podman[226723]: 2026-01-27 22:38:27.933126463 +0000 UTC m=+0.117906966 container exec 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_compute, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Jan 27 22:38:27 compute-0 podman[226723]: 2026-01-27 22:38:27.983293087 +0000 UTC m=+0.168073570 container exec_died 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260126, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 27 22:38:28 compute-0 systemd[1]: libpod-conmon-7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617.scope: Deactivated successfully.
Jan 27 22:38:28 compute-0 sudo[226720]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:28 compute-0 sudo[226903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhfmbdwnjcjxduvnpqjqekyrztxkyloo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553508.2136137-733-143489750116425/AnsiballZ_file.py'
Jan 27 22:38:28 compute-0 sudo[226903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:28 compute-0 python3.9[226905]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:38:28 compute-0 sudo[226903]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:29 compute-0 sudo[227058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tscutifhlxzqqomqetebkjqcgsyzcmht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553509.0411308-742-92271488292811/AnsiballZ_podman_container_info.py'
Jan 27 22:38:29 compute-0 sudo[227058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:29 compute-0 podman[227060]: 2026-01-27 22:38:29.543458102 +0000 UTC m=+0.084699993 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, container_name=openstack_network_exporter, name=ubi9-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, release=1755695350, build-date=2025-08-20T13:12:41)
Jan 27 22:38:29 compute-0 python3.9[227061]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Jan 27 22:38:29 compute-0 sudo[227058]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:29 compute-0 podman[201529]: time="2026-01-27T22:38:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:38:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:38:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27277 "" "Go-http-client/1.1"
Jan 27 22:38:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:38:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3844 "" "Go-http-client/1.1"
Jan 27 22:38:30 compute-0 sudo[227241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkdysspyheqhzvvyuayfptrfijkagzrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553509.8874376-750-71811466292601/AnsiballZ_podman_container_exec.py'
Jan 27 22:38:30 compute-0 sudo[227241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:30 compute-0 python3.9[227243]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 22:38:30 compute-0 systemd[1]: Started libpod-conmon-f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de.scope.
Jan 27 22:38:30 compute-0 podman[227244]: 2026-01-27 22:38:30.642075989 +0000 UTC m=+0.196127969 container exec f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 22:38:30 compute-0 podman[227244]: 2026-01-27 22:38:30.675502178 +0000 UTC m=+0.229554188 container exec_died f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 22:38:30 compute-0 sudo[227241]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:30 compute-0 systemd[1]: libpod-conmon-f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de.scope: Deactivated successfully.
Jan 27 22:38:31 compute-0 sudo[227423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hykxntttyyffoaqlhtgtujoeivvefkir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553510.920602-758-194140718247905/AnsiballZ_podman_container_exec.py'
Jan 27 22:38:31 compute-0 sudo[227423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:31 compute-0 openstack_network_exporter[204648]: ERROR   22:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:38:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:38:31 compute-0 openstack_network_exporter[204648]: ERROR   22:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:38:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:38:31 compute-0 python3.9[227425]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 22:38:31 compute-0 systemd[1]: Started libpod-conmon-f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de.scope.
Jan 27 22:38:31 compute-0 podman[227426]: 2026-01-27 22:38:31.594188288 +0000 UTC m=+0.114380825 container exec f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 22:38:31 compute-0 podman[227426]: 2026-01-27 22:38:31.631372034 +0000 UTC m=+0.151564571 container exec_died f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 22:38:31 compute-0 systemd[1]: libpod-conmon-f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de.scope: Deactivated successfully.
Jan 27 22:38:31 compute-0 sudo[227423]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:32 compute-0 sudo[227607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghraymogkvrehvmjkofqwdmkvenvgkvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553511.884413-766-206554867277600/AnsiballZ_file.py'
Jan 27 22:38:32 compute-0 sudo[227607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:32 compute-0 python3.9[227609]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:38:32 compute-0 sudo[227607]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:33 compute-0 sudo[227759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fntyiullyzmjlluucgdfmlzdxzoniylp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553512.778205-775-213460356863189/AnsiballZ_podman_container_info.py'
Jan 27 22:38:33 compute-0 sudo[227759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:33 compute-0 python3.9[227761]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Jan 27 22:38:33 compute-0 sudo[227759]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:34 compute-0 sudo[227924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apddqqcpwgcfsxxyjnbrkjhlqthnuxlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553513.7302527-783-468656128180/AnsiballZ_podman_container_exec.py'
Jan 27 22:38:34 compute-0 sudo[227924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:34 compute-0 python3.9[227926]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 22:38:34 compute-0 systemd[1]: Started libpod-conmon-245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b.scope.
Jan 27 22:38:34 compute-0 podman[227927]: 2026-01-27 22:38:34.426434019 +0000 UTC m=+0.118282046 container exec 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 22:38:34 compute-0 podman[227945]: 2026-01-27 22:38:34.495664639 +0000 UTC m=+0.056579472 container exec_died 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 22:38:34 compute-0 podman[227927]: 2026-01-27 22:38:34.502303761 +0000 UTC m=+0.194151808 container exec_died 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 22:38:34 compute-0 systemd[1]: libpod-conmon-245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b.scope: Deactivated successfully.
Jan 27 22:38:34 compute-0 sudo[227924]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:35 compute-0 sudo[228105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svonnumpomovlaigwooptitxevzipaxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553514.7614305-791-229633229382814/AnsiballZ_podman_container_exec.py'
Jan 27 22:38:35 compute-0 sudo[228105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:35 compute-0 python3.9[228107]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 22:38:35 compute-0 systemd[1]: Started libpod-conmon-245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b.scope.
Jan 27 22:38:35 compute-0 podman[228108]: 2026-01-27 22:38:35.422812229 +0000 UTC m=+0.093683837 container exec 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 22:38:35 compute-0 podman[228108]: 2026-01-27 22:38:35.453762503 +0000 UTC m=+0.124634091 container exec_died 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 22:38:35 compute-0 systemd[1]: libpod-conmon-245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b.scope: Deactivated successfully.
Jan 27 22:38:35 compute-0 sudo[228105]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:36 compute-0 sudo[228286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohuvtgnyujxymownjwrqwogzaiawplzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553515.692911-799-121416999831470/AnsiballZ_file.py'
Jan 27 22:38:36 compute-0 sudo[228286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:36 compute-0 python3.9[228288]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:38:36 compute-0 sudo[228286]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:36 compute-0 sudo[228438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stlkyfqlrgsvqbocpevxvlzvwmtgxuzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553516.480306-808-105624943920850/AnsiballZ_podman_container_info.py'
Jan 27 22:38:36 compute-0 sudo[228438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:37 compute-0 python3.9[228440]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Jan 27 22:38:37 compute-0 sudo[228438]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:37 compute-0 sudo[228603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxvzmotydbuizeliavlivydtnbbiktdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553517.3867612-816-244786725254194/AnsiballZ_podman_container_exec.py'
Jan 27 22:38:37 compute-0 sudo[228603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:37 compute-0 python3.9[228605]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 22:38:38 compute-0 systemd[1]: Started libpod-conmon-b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372.scope.
Jan 27 22:38:38 compute-0 podman[228606]: 2026-01-27 22:38:38.017951176 +0000 UTC m=+0.094164918 container exec b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.33.7, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 27 22:38:38 compute-0 podman[228606]: 2026-01-27 22:38:38.049065615 +0000 UTC m=+0.125279337 container exec_died b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., name=ubi9-minimal, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, release=1755695350, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 27 22:38:38 compute-0 systemd[1]: libpod-conmon-b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372.scope: Deactivated successfully.
Jan 27 22:38:38 compute-0 sudo[228603]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:38 compute-0 podman[228639]: 2026-01-27 22:38:38.16809576 +0000 UTC m=+0.067955388 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260126, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.4)
Jan 27 22:38:38 compute-0 sudo[228808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdlcdqztsbucemxcmrpdrjskqipxmwjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553518.2690318-824-139961957088641/AnsiballZ_podman_container_exec.py'
Jan 27 22:38:38 compute-0 sudo[228808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:38 compute-0 python3.9[228810]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 22:38:38 compute-0 systemd[1]: Started libpod-conmon-b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372.scope.
Jan 27 22:38:38 compute-0 podman[228811]: 2026-01-27 22:38:38.936668157 +0000 UTC m=+0.105207725 container exec b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, vcs-type=git, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41)
Jan 27 22:38:39 compute-0 podman[228831]: 2026-01-27 22:38:39.01526741 +0000 UTC m=+0.061639373 container exec_died b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=openstack_network_exporter, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, release=1755695350)
Jan 27 22:38:39 compute-0 podman[228811]: 2026-01-27 22:38:39.032682614 +0000 UTC m=+0.201222202 container exec_died b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 27 22:38:39 compute-0 systemd[1]: libpod-conmon-b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372.scope: Deactivated successfully.
Jan 27 22:38:39 compute-0 sudo[228808]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:39 compute-0 sudo[228993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffvfvwfwfbowtxnmpbzqjbovdursdwdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553519.2815888-832-119882590983896/AnsiballZ_file.py'
Jan 27 22:38:39 compute-0 sudo[228993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:39 compute-0 python3.9[228995]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:38:39 compute-0 sudo[228993]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:39 compute-0 nova_compute[185650]: 2026-01-27 22:38:39.989 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:38:40 compute-0 podman[229078]: 2026-01-27 22:38:40.365906968 +0000 UTC m=+0.065435191 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Jan 27 22:38:40 compute-0 sudo[229162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwwthzqxonntofpnrqllwubtgaubblyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553520.1508837-841-39889424494260/AnsiballZ_podman_container_info.py'
Jan 27 22:38:40 compute-0 sudo[229162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:40 compute-0 python3.9[229164]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Jan 27 22:38:40 compute-0 sudo[229162]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:40 compute-0 nova_compute[185650]: 2026-01-27 22:38:40.994 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:38:41 compute-0 nova_compute[185650]: 2026-01-27 22:38:41.022 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:38:41 compute-0 nova_compute[185650]: 2026-01-27 22:38:41.023 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:38:41 compute-0 nova_compute[185650]: 2026-01-27 22:38:41.024 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:38:41 compute-0 nova_compute[185650]: 2026-01-27 22:38:41.024 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 22:38:41 compute-0 nova_compute[185650]: 2026-01-27 22:38:41.334 185654 WARNING nova.virt.libvirt.driver [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 22:38:41 compute-0 nova_compute[185650]: 2026-01-27 22:38:41.335 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5676MB free_disk=72.47676849365234GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 22:38:41 compute-0 nova_compute[185650]: 2026-01-27 22:38:41.335 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:38:41 compute-0 nova_compute[185650]: 2026-01-27 22:38:41.335 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:38:41 compute-0 sudo[229326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njfxqgbcccnvfuipldfhaztgsomkwhjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553521.0172498-849-47510173775535/AnsiballZ_podman_container_exec.py'
Jan 27 22:38:41 compute-0 sudo[229326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:41 compute-0 nova_compute[185650]: 2026-01-27 22:38:41.389 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 22:38:41 compute-0 nova_compute[185650]: 2026-01-27 22:38:41.390 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 22:38:41 compute-0 nova_compute[185650]: 2026-01-27 22:38:41.416 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 22:38:41 compute-0 nova_compute[185650]: 2026-01-27 22:38:41.427 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 22:38:41 compute-0 nova_compute[185650]: 2026-01-27 22:38:41.429 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 22:38:41 compute-0 nova_compute[185650]: 2026-01-27 22:38:41.429 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:38:41 compute-0 python3.9[229328]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 22:38:41 compute-0 systemd[1]: Started libpod-conmon-d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236.scope.
Jan 27 22:38:41 compute-0 podman[229329]: 2026-01-27 22:38:41.75822991 +0000 UTC m=+0.141320354 container exec d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 27 22:38:41 compute-0 podman[229329]: 2026-01-27 22:38:41.797046139 +0000 UTC m=+0.180136493 container exec_died d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 22:38:41 compute-0 sudo[229326]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:41 compute-0 systemd[1]: libpod-conmon-d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236.scope: Deactivated successfully.
Jan 27 22:38:42 compute-0 sudo[229508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cooaczynfmvasgnatbipwdahjvxrxllm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553522.0651748-857-21125957237988/AnsiballZ_podman_container_exec.py'
Jan 27 22:38:42 compute-0 sudo[229508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:42 compute-0 podman[229510]: 2026-01-27 22:38:42.511842119 +0000 UTC m=+0.076011206 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 22:38:42 compute-0 python3.9[229511]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 22:38:42 compute-0 systemd[1]: Started libpod-conmon-d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236.scope.
Jan 27 22:38:42 compute-0 podman[229534]: 2026-01-27 22:38:42.766885949 +0000 UTC m=+0.116279253 container exec d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Jan 27 22:38:42 compute-0 podman[229534]: 2026-01-27 22:38:42.80076994 +0000 UTC m=+0.150163214 container exec_died d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 22:38:42 compute-0 systemd[1]: libpod-conmon-d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236.scope: Deactivated successfully.
Jan 27 22:38:42 compute-0 sudo[229508]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:43 compute-0 nova_compute[185650]: 2026-01-27 22:38:43.423 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:38:43 compute-0 nova_compute[185650]: 2026-01-27 22:38:43.424 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:38:43 compute-0 nova_compute[185650]: 2026-01-27 22:38:43.424 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 22:38:43 compute-0 nova_compute[185650]: 2026-01-27 22:38:43.425 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 22:38:43 compute-0 nova_compute[185650]: 2026-01-27 22:38:43.438 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 22:38:43 compute-0 nova_compute[185650]: 2026-01-27 22:38:43.440 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:38:43 compute-0 nova_compute[185650]: 2026-01-27 22:38:43.440 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:38:43 compute-0 sudo[229713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msmmfhrwastbcaxtaxrugbtdilwsyptu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553523.0911553-865-210616430368400/AnsiballZ_file.py'
Jan 27 22:38:43 compute-0 sudo[229713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:43 compute-0 python3.9[229715]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:38:43 compute-0 sudo[229713]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:43 compute-0 nova_compute[185650]: 2026-01-27 22:38:43.992 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:38:43 compute-0 nova_compute[185650]: 2026-01-27 22:38:43.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:38:43 compute-0 nova_compute[185650]: 2026-01-27 22:38:43.994 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 22:38:44 compute-0 sudo[229865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weyboheyxvizsupyabxzqreyvjpanpjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553524.0730488-874-62466119310730/AnsiballZ_podman_container_info.py'
Jan 27 22:38:44 compute-0 sudo[229865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:44 compute-0 python3.9[229867]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Jan 27 22:38:44 compute-0 sudo[229865]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:44 compute-0 nova_compute[185650]: 2026-01-27 22:38:44.994 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:38:45 compute-0 podman[230003]: 2026-01-27 22:38:45.38794529 +0000 UTC m=+0.088469960 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 22:38:45 compute-0 sudo[230049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcxgbdwnvdmzcdvardiuiaztwxyzgnfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553525.0084078-882-161306198154397/AnsiballZ_podman_container_exec.py'
Jan 27 22:38:45 compute-0 sudo[230049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:45 compute-0 python3.9[230051]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 22:38:45 compute-0 systemd[1]: Started libpod-conmon-0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650.scope.
Jan 27 22:38:45 compute-0 podman[230053]: 2026-01-27 22:38:45.719038847 +0000 UTC m=+0.092486886 container exec 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, architecture=x86_64, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, vcs-type=git, version=9.4, build-date=2024-09-18T21:23:30, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, name=ubi9, container_name=kepler, vendor=Red Hat, Inc., managed_by=edpm_ansible, maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 27 22:38:45 compute-0 podman[230053]: 2026-01-27 22:38:45.750529126 +0000 UTC m=+0.123977155 container exec_died 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vcs-type=git, vendor=Red Hat, Inc., version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, managed_by=edpm_ansible, maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.tags=base rhel9, io.openshift.expose-services=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, com.redhat.component=ubi9-container, container_name=kepler, release=1214.1726694543, distribution-scope=public)
Jan 27 22:38:45 compute-0 systemd[1]: libpod-conmon-0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650.scope: Deactivated successfully.
Jan 27 22:38:45 compute-0 sudo[230049]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:45 compute-0 nova_compute[185650]: 2026-01-27 22:38:45.992 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:38:46 compute-0 sudo[230229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iffyiylglphloguxqxtazsivitoteykb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553525.9646144-890-95396497591548/AnsiballZ_podman_container_exec.py'
Jan 27 22:38:46 compute-0 sudo[230229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:46 compute-0 python3.9[230231]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 27 22:38:46 compute-0 systemd[1]: Started libpod-conmon-0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650.scope.
Jan 27 22:38:46 compute-0 podman[230232]: 2026-01-27 22:38:46.587518522 +0000 UTC m=+0.089937759 container exec 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, io.openshift.expose-services=, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, vendor=Red Hat, Inc.)
Jan 27 22:38:46 compute-0 podman[230232]: 2026-01-27 22:38:46.621049383 +0000 UTC m=+0.123468600 container exec_died 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_id=kepler, managed_by=edpm_ansible, release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., architecture=x86_64, name=ubi9, version=9.4, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, vcs-type=git, com.redhat.component=ubi9-container, distribution-scope=public, release-0.7.12=)
Jan 27 22:38:46 compute-0 sudo[230229]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:46 compute-0 podman[230247]: 2026-01-27 22:38:46.668816075 +0000 UTC m=+0.080571215 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, release-0.7.12=, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, build-date=2024-09-18T21:23:30, distribution-scope=public, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., config_id=kepler, managed_by=edpm_ansible, architecture=x86_64, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543)
Jan 27 22:38:46 compute-0 systemd[1]: libpod-conmon-0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650.scope: Deactivated successfully.
Jan 27 22:38:47 compute-0 sudo[230426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xazehguwbjsgvuoolaqsgdiojamopdxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553526.8415368-898-106169992021089/AnsiballZ_file.py'
Jan 27 22:38:47 compute-0 sudo[230426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:47 compute-0 python3.9[230428]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:38:47 compute-0 sudo[230426]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:48 compute-0 sudo[230578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifbnkdwlrbxkvflsqosnafxhzjvwyydl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553527.7750523-907-156881247504759/AnsiballZ_file.py'
Jan 27 22:38:48 compute-0 sudo[230578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:48 compute-0 python3.9[230580]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:38:48 compute-0 sudo[230578]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:48 compute-0 sudo[230747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymdkhjaoidhpbnplljwqnmwihqpwuhuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553528.5546913-915-255890311762604/AnsiballZ_stat.py'
Jan 27 22:38:48 compute-0 sudo[230747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:48 compute-0 podman[230704]: 2026-01-27 22:38:48.947665654 +0000 UTC m=+0.112842434 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 22:38:49 compute-0 python3.9[230753]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:38:49 compute-0 sudo[230747]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:49 compute-0 sudo[230880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyrjhsepeezlypmpmixlaunztpehhdqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553528.5546913-915-255890311762604/AnsiballZ_copy.py'
Jan 27 22:38:49 compute-0 sudo[230880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:49 compute-0 python3.9[230882]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769553528.5546913-915-255890311762604/.source.yaml _original_basename=firewall.yaml follow=False checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:38:50 compute-0 sudo[230880]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:50 compute-0 sudo[231032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxjlfphmwsbrhrbwuayntijozmzwbjof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553530.2823505-931-137585671943760/AnsiballZ_file.py'
Jan 27 22:38:50 compute-0 sudo[231032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:50 compute-0 python3.9[231034]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:38:50 compute-0 sudo[231032]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:51 compute-0 sudo[231184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvxdrdetjeqpapdpqyetbbbvzjypzwij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553531.0766318-939-3174738446908/AnsiballZ_stat.py'
Jan 27 22:38:51 compute-0 sudo[231184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:51 compute-0 python3.9[231186]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:38:51 compute-0 sudo[231184]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:51 compute-0 sudo[231262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mipnwuigkermzeylpsbfwgkyeviwpvxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553531.0766318-939-3174738446908/AnsiballZ_file.py'
Jan 27 22:38:52 compute-0 sudo[231262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:52 compute-0 python3.9[231264]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:38:52 compute-0 sudo[231262]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:52 compute-0 sudo[231414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peqcahzfqojxhzkmzwrkolwjmcfkeuif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553532.3784494-951-114227990026833/AnsiballZ_stat.py'
Jan 27 22:38:52 compute-0 sudo[231414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:52 compute-0 python3.9[231416]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:38:52 compute-0 sudo[231414]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:53 compute-0 sudo[231492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxmwbtbkqzqpcqkqhgksiwkyudigixrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553532.3784494-951-114227990026833/AnsiballZ_file.py'
Jan 27 22:38:53 compute-0 sudo[231492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:53 compute-0 python3.9[231494]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.r1pdnf7g recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:38:53 compute-0 sudo[231492]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:54 compute-0 sudo[231644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsbozbpthounhjnrizuxeufsnhhrthlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553533.6229432-963-39001613449280/AnsiballZ_stat.py'
Jan 27 22:38:54 compute-0 sudo[231644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:54 compute-0 python3.9[231646]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:38:54 compute-0 sudo[231644]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:54 compute-0 sudo[231722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvslmoljvsxaxoeurpkntkisteuxaveb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553533.6229432-963-39001613449280/AnsiballZ_file.py'
Jan 27 22:38:54 compute-0 sudo[231722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:54 compute-0 python3.9[231724]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:38:54 compute-0 sudo[231722]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:55 compute-0 podman[231824]: 2026-01-27 22:38:55.386091558 +0000 UTC m=+0.083419442 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 22:38:55 compute-0 sudo[231895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cynsxdllhqdkkhcalwjywvyvxvfjigts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553535.0832114-976-167685379746303/AnsiballZ_command.py'
Jan 27 22:38:55 compute-0 sudo[231895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:55 compute-0 python3.9[231897]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:38:55 compute-0 sudo[231895]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:56 compute-0 sudo[232049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lczumcrmisfuhpwynjwbommnvkjzgted ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769553535.9341447-984-129566689943484/AnsiballZ_edpm_nftables_from_files.py'
Jan 27 22:38:56 compute-0 sudo[232049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:56 compute-0 python3[232051]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 27 22:38:56 compute-0 sudo[232049]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:57 compute-0 sudo[232201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cobyvvjeypyvphpobwdnatxqyqcmdsbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553537.0273602-992-218110342160375/AnsiballZ_stat.py'
Jan 27 22:38:57 compute-0 sudo[232201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:57 compute-0 python3.9[232203]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:38:57 compute-0 sudo[232201]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:58 compute-0 sudo[232279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khhspzbzrahqsvltrxfozkhagyahpsxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553537.0273602-992-218110342160375/AnsiballZ_file.py'
Jan 27 22:38:58 compute-0 sudo[232279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:58 compute-0 python3.9[232281]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:38:58 compute-0 sudo[232279]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:59 compute-0 sudo[232431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twgwztzbelkvfwrjxyvwwpajxstpgsit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553538.571465-1004-6385089946085/AnsiballZ_stat.py'
Jan 27 22:38:59 compute-0 sudo[232431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:59 compute-0 python3.9[232433]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:38:59 compute-0 sudo[232431]: pam_unix(sudo:session): session closed for user root
Jan 27 22:38:59 compute-0 sudo[232509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtyubtmomdltbckcssryhidnsyyobbjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553538.571465-1004-6385089946085/AnsiballZ_file.py'
Jan 27 22:38:59 compute-0 sudo[232509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:38:59 compute-0 podman[232511]: 2026-01-27 22:38:59.710628531 +0000 UTC m=+0.059382439 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=Red Hat, Inc., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Jan 27 22:38:59 compute-0 podman[201529]: time="2026-01-27T22:38:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:38:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:38:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 27 22:38:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:38:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3846 "" "Go-http-client/1.1"
Jan 27 22:38:59 compute-0 python3.9[232512]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:38:59 compute-0 sudo[232509]: pam_unix(sudo:session): session closed for user root
Jan 27 22:39:00 compute-0 sudo[232683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqjjhnaejjxoxbecdzmouszqlvzcbayv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553540.0670614-1016-241135803592995/AnsiballZ_stat.py'
Jan 27 22:39:00 compute-0 sudo[232683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:39:00 compute-0 python3.9[232685]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:39:00 compute-0 sudo[232683]: pam_unix(sudo:session): session closed for user root
Jan 27 22:39:01 compute-0 sudo[232761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eucgkvwppweypjusmhnhgffswgejfouv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553540.0670614-1016-241135803592995/AnsiballZ_file.py'
Jan 27 22:39:01 compute-0 sudo[232761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:39:01 compute-0 python3.9[232763]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:39:01 compute-0 sudo[232761]: pam_unix(sudo:session): session closed for user root
Jan 27 22:39:01 compute-0 openstack_network_exporter[204648]: ERROR   22:39:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:39:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:39:01 compute-0 openstack_network_exporter[204648]: ERROR   22:39:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:39:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:39:02 compute-0 sudo[232913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmwanndalivgbinerllkedeygjgfvmry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553541.5370958-1028-126757520372407/AnsiballZ_stat.py'
Jan 27 22:39:02 compute-0 sudo[232913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:39:02 compute-0 python3.9[232915]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:39:02 compute-0 sudo[232913]: pam_unix(sudo:session): session closed for user root
Jan 27 22:39:02 compute-0 sudo[232991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkigkbixtxjcrbtukolzttyyeczhbgdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553541.5370958-1028-126757520372407/AnsiballZ_file.py'
Jan 27 22:39:02 compute-0 sudo[232991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:39:02 compute-0 python3.9[232993]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:39:02 compute-0 sudo[232991]: pam_unix(sudo:session): session closed for user root
Jan 27 22:39:03 compute-0 sudo[233143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czzlhburjrzdpsropcmaffsbqtoplasd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553542.9622145-1040-117848853914818/AnsiballZ_stat.py'
Jan 27 22:39:03 compute-0 sudo[233143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:39:03 compute-0 python3.9[233145]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:39:03 compute-0 sudo[233143]: pam_unix(sudo:session): session closed for user root
Jan 27 22:39:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:39:04.124 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:39:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:39:04.125 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:39:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:39:04.125 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:39:04 compute-0 sudo[233268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnjntqnougyjzmzewtoksnanqefgtewp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553542.9622145-1040-117848853914818/AnsiballZ_copy.py'
Jan 27 22:39:04 compute-0 sudo[233268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:39:04 compute-0 python3.9[233270]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769553542.9622145-1040-117848853914818/.source.nft follow=False _original_basename=ruleset.j2 checksum=b82fbd2c71bb7c36c630c2301913f0f42fd2e7ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:39:04 compute-0 sudo[233268]: pam_unix(sudo:session): session closed for user root
Jan 27 22:39:05 compute-0 sudo[233420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmicjckatyklxhvihmfcksfpjjdtbmep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553544.6204193-1055-52140510103700/AnsiballZ_file.py'
Jan 27 22:39:05 compute-0 sudo[233420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:39:05 compute-0 python3.9[233422]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:39:05 compute-0 sudo[233420]: pam_unix(sudo:session): session closed for user root
Jan 27 22:39:05 compute-0 sudo[233572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhmdvxdzoxendtxblbwljuxgkjmltsmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553545.4465923-1063-34260652135051/AnsiballZ_command.py'
Jan 27 22:39:05 compute-0 sudo[233572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:39:05 compute-0 python3.9[233574]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:39:06 compute-0 sudo[233572]: pam_unix(sudo:session): session closed for user root
Jan 27 22:39:06 compute-0 sudo[233727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrusjjlmqjpiqktjnayifscysrfsqsbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553546.3227134-1071-158499467840525/AnsiballZ_blockinfile.py'
Jan 27 22:39:06 compute-0 sudo[233727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:39:07 compute-0 python3.9[233729]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:39:07 compute-0 sudo[233727]: pam_unix(sudo:session): session closed for user root
Jan 27 22:39:07 compute-0 sudo[233879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-przecrlyhzpitezyndkzcmxuowdvlvez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553547.5659363-1080-241391316128347/AnsiballZ_command.py'
Jan 27 22:39:07 compute-0 sudo[233879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:39:08 compute-0 python3.9[233881]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:39:08 compute-0 sudo[233879]: pam_unix(sudo:session): session closed for user root
Jan 27 22:39:08 compute-0 podman[233885]: 2026-01-27 22:39:08.363834361 +0000 UTC m=+0.067311568 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 27 22:39:08 compute-0 sudo[234050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chfhibmagcospxvltosxsskaekiiycgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553548.4396436-1088-64014933083578/AnsiballZ_stat.py'
Jan 27 22:39:08 compute-0 sudo[234050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:39:09 compute-0 python3.9[234052]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 27 22:39:09 compute-0 sudo[234050]: pam_unix(sudo:session): session closed for user root
Jan 27 22:39:09 compute-0 sudo[234204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvunlfcscsvmajlwokrwprykqieurvmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553549.288934-1096-251451833997103/AnsiballZ_command.py'
Jan 27 22:39:09 compute-0 sudo[234204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:39:09 compute-0 python3.9[234206]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:39:09 compute-0 sudo[234204]: pam_unix(sudo:session): session closed for user root
Jan 27 22:39:10 compute-0 sudo[234359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtmfciskcxiacyhzswayfwranuebotcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553550.0776298-1104-280340505731929/AnsiballZ_file.py'
Jan 27 22:39:10 compute-0 sudo[234359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:39:10 compute-0 podman[234361]: 2026-01-27 22:39:10.585223429 +0000 UTC m=+0.113020560 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 22:39:10 compute-0 python3.9[234362]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:39:10 compute-0 sudo[234359]: pam_unix(sudo:session): session closed for user root
Jan 27 22:39:11 compute-0 sshd-session[213438]: Connection closed by 192.168.122.30 port 53754
Jan 27 22:39:11 compute-0 sshd-session[213435]: pam_unix(sshd:session): session closed for user zuul
Jan 27 22:39:11 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Jan 27 22:39:11 compute-0 systemd[1]: session-26.scope: Consumed 1min 28.217s CPU time.
Jan 27 22:39:11 compute-0 systemd-logind[789]: Session 26 logged out. Waiting for processes to exit.
Jan 27 22:39:11 compute-0 systemd-logind[789]: Removed session 26.
Jan 27 22:39:13 compute-0 podman[234404]: 2026-01-27 22:39:13.378165332 +0000 UTC m=+0.080013568 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 22:39:16 compute-0 podman[234428]: 2026-01-27 22:39:16.380926434 +0000 UTC m=+0.083724120 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 22:39:16 compute-0 sshd-session[234449]: Accepted publickey for zuul from 192.168.122.30 port 57728 ssh2: ECDSA SHA256:f2siSFgqhRl+V43NMPJ82N3mZUylXFtu0KAbYfQTK7A
Jan 27 22:39:16 compute-0 systemd-logind[789]: New session 27 of user zuul.
Jan 27 22:39:16 compute-0 systemd[1]: Started Session 27 of User zuul.
Jan 27 22:39:16 compute-0 sshd-session[234449]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 22:39:17 compute-0 podman[234552]: 2026-01-27 22:39:17.423969627 +0000 UTC m=+0.114798388 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, version=9.4, build-date=2024-09-18T21:23:30, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.buildah.version=1.29.0)
Jan 27 22:39:17 compute-0 python3.9[234622]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:39:19 compute-0 sudo[234789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmlqyfzjgqmdiernqosiojhqtdyehlcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553558.4844763-29-44069469777647/AnsiballZ_systemd.py'
Jan 27 22:39:19 compute-0 sudo[234789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:39:19 compute-0 podman[234750]: 2026-01-27 22:39:19.381685351 +0000 UTC m=+0.129002770 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 22:39:19 compute-0 python3.9[234796]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Jan 27 22:39:19 compute-0 sudo[234789]: pam_unix(sudo:session): session closed for user root
Jan 27 22:39:20 compute-0 sudo[234956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nidpertmswzpxnthausznbwngyoqukwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553559.984211-37-176897241160732/AnsiballZ_setup.py'
Jan 27 22:39:20 compute-0 sudo[234956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:39:20 compute-0 python3.9[234958]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 27 22:39:21 compute-0 sudo[234956]: pam_unix(sudo:session): session closed for user root
Jan 27 22:39:21 compute-0 sudo[235040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeggsydqaqjxpgicjokltfdpzfdihwfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553559.984211-37-176897241160732/AnsiballZ_dnf.py'
Jan 27 22:39:21 compute-0 sudo[235040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:39:21 compute-0 python3.9[235042]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 27 22:39:24 compute-0 sudo[235040]: pam_unix(sudo:session): session closed for user root
Jan 27 22:39:25 compute-0 sudo[235198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqjvynkojueromrbksjaxxjtgvwtybbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553564.7917466-49-229984937244600/AnsiballZ_stat.py'
Jan 27 22:39:25 compute-0 sudo[235198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:39:25 compute-0 python3.9[235200]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:39:25 compute-0 sudo[235198]: pam_unix(sudo:session): session closed for user root
Jan 27 22:39:26 compute-0 sudo[235338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-milenpyaoywzfjlqguzbnfoadmifzebh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553564.7917466-49-229984937244600/AnsiballZ_copy.py'
Jan 27 22:39:26 compute-0 sudo[235338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:39:26 compute-0 podman[235295]: 2026-01-27 22:39:26.291421915 +0000 UTC m=+0.071205084 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 22:39:26 compute-0 python3.9[235347]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/rsyslog/ca-openshift.crt mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769553564.7917466-49-229984937244600/.source.crt _original_basename=ca-openshift.crt follow=False checksum=1d88bab26da5c85710a770c705f3555781bf2a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:39:26 compute-0 sudo[235338]: pam_unix(sudo:session): session closed for user root
Jan 27 22:39:27 compute-0 sudo[235497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrjowtrspqbzckdpahqdvicehlcsshhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553566.7258272-64-210701282039284/AnsiballZ_file.py'
Jan 27 22:39:27 compute-0 sudo[235497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:39:27 compute-0 python3.9[235499]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:39:27 compute-0 sudo[235497]: pam_unix(sudo:session): session closed for user root
Jan 27 22:39:28 compute-0 sudo[235649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaiciqlmouwjximetsxtlwikotvbvekz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553567.684528-72-271786929380072/AnsiballZ_stat.py'
Jan 27 22:39:28 compute-0 sudo[235649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:39:28 compute-0 python3.9[235651]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 27 22:39:28 compute-0 sudo[235649]: pam_unix(sudo:session): session closed for user root
Jan 27 22:39:28 compute-0 sudo[235772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcaldcdrovdftjbahlapgffvknemncqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553567.684528-72-271786929380072/AnsiballZ_copy.py'
Jan 27 22:39:28 compute-0 sudo[235772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:39:28 compute-0 python3.9[235774]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/10-telemetry.conf mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769553567.684528-72-271786929380072/.source.conf _original_basename=10-telemetry.conf follow=False checksum=76865d9dd4bf9cd322a47065c046bcac194645ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 27 22:39:28 compute-0 sudo[235772]: pam_unix(sudo:session): session closed for user root
Jan 27 22:39:29 compute-0 sudo[235924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfphvsrungaswkdrvlfpjivkbvmcqfbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769553569.0155518-87-68047511175319/AnsiballZ_systemd.py'
Jan 27 22:39:29 compute-0 sudo[235924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:39:29 compute-0 python3.9[235926]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 27 22:39:29 compute-0 systemd[1]: Stopping System Logging Service...
Jan 27 22:39:29 compute-0 podman[201529]: time="2026-01-27T22:39:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:39:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:39:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 27 22:39:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:39:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3846 "" "Go-http-client/1.1"
Jan 27 22:39:29 compute-0 podman[235930]: 2026-01-27 22:39:29.840652022 +0000 UTC m=+0.096865803 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-minimal-container, distribution-scope=public, container_name=openstack_network_exporter, io.buildah.version=1.33.7, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., release=1755695350, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Jan 27 22:39:30 compute-0 rsyslogd[1003]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1003" x-info="https://www.rsyslog.com"] exiting on signal 15.
Jan 27 22:39:30 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Jan 27 22:39:30 compute-0 systemd[1]: Stopped System Logging Service.
Jan 27 22:39:30 compute-0 systemd[1]: rsyslog.service: Consumed 3.914s CPU time, 8.8M memory peak, read 0B from disk, written 5.3M to disk.
Jan 27 22:39:30 compute-0 systemd[1]: Starting System Logging Service...
Jan 27 22:39:30 compute-0 rsyslogd[235951]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="235951" x-info="https://www.rsyslog.com"] start
Jan 27 22:39:30 compute-0 systemd[1]: Started System Logging Service.
Jan 27 22:39:30 compute-0 rsyslogd[235951]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 22:39:30 compute-0 rsyslogd[235951]: Warning: Certificate file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Jan 27 22:39:30 compute-0 rsyslogd[235951]: Warning: Key file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Jan 27 22:39:30 compute-0 rsyslogd[235951]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2510.0-2.el9]
Jan 27 22:39:30 compute-0 sudo[235924]: pam_unix(sudo:session): session closed for user root
Jan 27 22:39:30 compute-0 rsyslogd[235951]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2510.0-2.el9]
Jan 27 22:39:30 compute-0 sshd-session[234452]: Connection closed by 192.168.122.30 port 57728
Jan 27 22:39:30 compute-0 sshd-session[234449]: pam_unix(sshd:session): session closed for user zuul
Jan 27 22:39:30 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Jan 27 22:39:30 compute-0 systemd[1]: session-27.scope: Consumed 10.386s CPU time.
Jan 27 22:39:30 compute-0 systemd-logind[789]: Session 27 logged out. Waiting for processes to exit.
Jan 27 22:39:30 compute-0 systemd-logind[789]: Removed session 27.
Jan 27 22:39:31 compute-0 openstack_network_exporter[204648]: ERROR   22:39:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:39:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:39:31 compute-0 openstack_network_exporter[204648]: ERROR   22:39:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:39:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.101 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.102 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.102 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c646060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b401190>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f826c6475f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b401190>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b401190>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b401190>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b401190>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b401190>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b401190>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b401190>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b401190>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b401190>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.108 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f826c645dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b401190>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.108 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f826c647800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.108 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b401190>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.109 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.110 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f826c647650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.109 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b401190>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.110 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f826c645640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645490>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b401190>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.111 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.112 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f826c8ae7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b401190>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b401190>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b401190>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.113 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.114 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f826c645a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.114 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b401190>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.114 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f826c6462a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.115 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b401190>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.116 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b401190>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f826c647f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.117 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.117 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f826c645af0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.116 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b401190>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': [], 'cpu': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.117 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f826c645d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.117 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b401190>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': [], 'cpu': [], 'disk.ephemeral.size': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.118 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f826c645b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.119 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b401190>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': [], 'cpu': [], 'disk.ephemeral.size': [], 'memory.usage': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.120 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b401190>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': [], 'cpu': [], 'disk.ephemeral.size': [], 'memory.usage': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f826c644a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.121 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.121 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b401190>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': [], 'cpu': [], 'disk.ephemeral.size': [], 'memory.usage': [], 'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.121 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f826c6453a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.122 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6477a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b401190>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': [], 'cpu': [], 'disk.ephemeral.size': [], 'memory.usage': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.122 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.123 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f826c6454c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.123 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f826c645520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.124 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f826c645d90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.124 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f826c646570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.124 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f826c645580>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.124 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f826c6455e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.125 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.125 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f826c644050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.125 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.125 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f826c647860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.125 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.125 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f826c6476e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.125 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.125 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f826c6456a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.126 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.126 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f826f277b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.126 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.126 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f826c647770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.126 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.126 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.127 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.127 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.127 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.128 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.128 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.128 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.128 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.129 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.129 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.129 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.129 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.129 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.129 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.130 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.130 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.130 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.130 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.130 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.131 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.131 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.131 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.131 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.131 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.131 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:39:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:39:38.131 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:39:39 compute-0 podman[235981]: 2026-01-27 22:39:39.377564029 +0000 UTC m=+0.078427824 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true)
Jan 27 22:39:39 compute-0 nova_compute[185650]: 2026-01-27 22:39:39.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:39:39 compute-0 nova_compute[185650]: 2026-01-27 22:39:39.993 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 22:39:40 compute-0 nova_compute[185650]: 2026-01-27 22:39:40.011 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 22:39:40 compute-0 nova_compute[185650]: 2026-01-27 22:39:40.011 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:39:40 compute-0 nova_compute[185650]: 2026-01-27 22:39:40.012 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 22:39:40 compute-0 nova_compute[185650]: 2026-01-27 22:39:40.023 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:39:41 compute-0 podman[236001]: 2026-01-27 22:39:41.39938548 +0000 UTC m=+0.106612941 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 27 22:39:42 compute-0 nova_compute[185650]: 2026-01-27 22:39:42.038 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:39:42 compute-0 nova_compute[185650]: 2026-01-27 22:39:42.068 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:39:42 compute-0 nova_compute[185650]: 2026-01-27 22:39:42.069 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:39:42 compute-0 nova_compute[185650]: 2026-01-27 22:39:42.069 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:39:42 compute-0 nova_compute[185650]: 2026-01-27 22:39:42.070 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 22:39:42 compute-0 nova_compute[185650]: 2026-01-27 22:39:42.453 185654 WARNING nova.virt.libvirt.driver [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 22:39:42 compute-0 nova_compute[185650]: 2026-01-27 22:39:42.454 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5745MB free_disk=72.4754524230957GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 22:39:42 compute-0 nova_compute[185650]: 2026-01-27 22:39:42.454 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:39:42 compute-0 nova_compute[185650]: 2026-01-27 22:39:42.455 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:39:42 compute-0 nova_compute[185650]: 2026-01-27 22:39:42.592 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 22:39:42 compute-0 nova_compute[185650]: 2026-01-27 22:39:42.592 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 22:39:42 compute-0 nova_compute[185650]: 2026-01-27 22:39:42.671 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 22:39:42 compute-0 nova_compute[185650]: 2026-01-27 22:39:42.687 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 22:39:42 compute-0 nova_compute[185650]: 2026-01-27 22:39:42.688 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 22:39:42 compute-0 nova_compute[185650]: 2026-01-27 22:39:42.689 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:39:44 compute-0 podman[236020]: 2026-01-27 22:39:44.389813271 +0000 UTC m=+0.088776060 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 22:39:44 compute-0 nova_compute[185650]: 2026-01-27 22:39:44.644 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:39:44 compute-0 nova_compute[185650]: 2026-01-27 22:39:44.645 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 22:39:44 compute-0 nova_compute[185650]: 2026-01-27 22:39:44.645 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 22:39:44 compute-0 nova_compute[185650]: 2026-01-27 22:39:44.663 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 22:39:44 compute-0 nova_compute[185650]: 2026-01-27 22:39:44.663 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:39:44 compute-0 nova_compute[185650]: 2026-01-27 22:39:44.664 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:39:44 compute-0 nova_compute[185650]: 2026-01-27 22:39:44.664 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:39:44 compute-0 nova_compute[185650]: 2026-01-27 22:39:44.665 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 22:39:44 compute-0 nova_compute[185650]: 2026-01-27 22:39:44.994 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:39:44 compute-0 nova_compute[185650]: 2026-01-27 22:39:44.995 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:39:46 compute-0 nova_compute[185650]: 2026-01-27 22:39:46.994 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:39:47 compute-0 podman[236045]: 2026-01-27 22:39:47.363276044 +0000 UTC m=+0.067367309 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 22:39:47 compute-0 nova_compute[185650]: 2026-01-27 22:39:47.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:39:48 compute-0 podman[236065]: 2026-01-27 22:39:48.376188766 +0000 UTC m=+0.076668747 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, managed_by=edpm_ansible, release-0.7.12=, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, vendor=Red Hat, Inc., container_name=kepler, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Jan 27 22:39:50 compute-0 podman[236086]: 2026-01-27 22:39:50.441940059 +0000 UTC m=+0.129443221 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Jan 27 22:39:53 compute-0 sshd-session[236112]: Received disconnect from 45.227.254.170 port 40592:11:  [preauth]
Jan 27 22:39:53 compute-0 sshd-session[236112]: Disconnected from authenticating user root 45.227.254.170 port 40592 [preauth]
Jan 27 22:39:57 compute-0 podman[236114]: 2026-01-27 22:39:57.37944164 +0000 UTC m=+0.078870497 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 22:39:59 compute-0 podman[201529]: time="2026-01-27T22:39:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:39:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:39:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 27 22:39:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:39:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3848 "" "Go-http-client/1.1"
Jan 27 22:40:00 compute-0 podman[236139]: 2026-01-27 22:40:00.434201614 +0000 UTC m=+0.124347206 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, maintainer=Red Hat, Inc., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 27 22:40:01 compute-0 openstack_network_exporter[204648]: ERROR   22:40:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:40:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:40:01 compute-0 openstack_network_exporter[204648]: ERROR   22:40:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:40:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:40:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:40:04.125 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:40:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:40:04.126 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:40:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:40:04.126 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:40:10 compute-0 podman[236159]: 2026-01-27 22:40:10.457444103 +0000 UTC m=+0.145950066 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Jan 27 22:40:12 compute-0 podman[236178]: 2026-01-27 22:40:12.432005294 +0000 UTC m=+0.134928322 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 27 22:40:13 compute-0 sshd-session[236196]: Accepted publickey for zuul from 38.102.83.151 port 33810 ssh2: RSA SHA256:ZuKoWm/C8Whnhgf9tPVFWdXLNeFqjD7XfMzDvbUlFFI
Jan 27 22:40:13 compute-0 systemd-logind[789]: New session 28 of user zuul.
Jan 27 22:40:13 compute-0 systemd[1]: Started Session 28 of User zuul.
Jan 27 22:40:13 compute-0 sshd-session[236196]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 22:40:14 compute-0 podman[236330]: 2026-01-27 22:40:14.751049879 +0000 UTC m=+0.071861695 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 22:40:15 compute-0 python3[236395]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:40:16 compute-0 sudo[236616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyhqucrxvyrymvjtbrtryimefkwvevcf ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769553616.2815368-37145-156743041179076/AnsiballZ_command.py'
Jan 27 22:40:16 compute-0 sudo[236616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:40:16 compute-0 python3[236618]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:40:17 compute-0 sudo[236616]: pam_unix(sudo:session): session closed for user root
Jan 27 22:40:17 compute-0 sudo[236785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcoeahjdcbsspedgfuivypsxxcstmbra ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769553617.4380338-37156-216594236162131/AnsiballZ_command.py'
Jan 27 22:40:17 compute-0 sudo[236785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:40:17 compute-0 podman[236744]: 2026-01-27 22:40:17.883068561 +0000 UTC m=+0.072293432 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 22:40:18 compute-0 python3[236790]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "nova_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:40:19 compute-0 podman[236793]: 2026-01-27 22:40:19.402592475 +0000 UTC m=+0.103578462 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.29.0, vcs-type=git, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, version=9.4, maintainer=Red Hat, Inc., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, distribution-scope=public, io.openshift.tags=base rhel9)
Jan 27 22:40:19 compute-0 sudo[236785]: pam_unix(sudo:session): session closed for user root
Jan 27 22:40:20 compute-0 podman[236935]: 2026-01-27 22:40:20.934217061 +0000 UTC m=+0.127147315 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 27 22:40:21 compute-0 python3[236976]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 27 22:40:21 compute-0 sudo[237134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwwmqzicudecevqflsjrgkitwcbbhwsg ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769553621.4840128-37202-87572756727112/AnsiballZ_setup.py'
Jan 27 22:40:21 compute-0 sudo[237134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:40:22 compute-0 python3[237136]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 27 22:40:23 compute-0 sudo[237134]: pam_unix(sudo:session): session closed for user root
Jan 27 22:40:24 compute-0 sudo[237359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgjmmuvlslhyyomcigknktrspewzwuxv ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769553623.9326944-37233-102307452882520/AnsiballZ_command.py'
Jan 27 22:40:24 compute-0 sudo[237359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:40:24 compute-0 python3[237361]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:40:24 compute-0 sudo[237359]: pam_unix(sudo:session): session closed for user root
Jan 27 22:40:25 compute-0 sudo[237524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhfkczjuqstvpldxrlrzfqpazfcnhyns ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769553624.963313-37250-167867784542846/AnsiballZ_command.py'
Jan 27 22:40:25 compute-0 sudo[237524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:40:25 compute-0 python3[237526]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:40:25 compute-0 sudo[237524]: pam_unix(sudo:session): session closed for user root
Jan 27 22:40:28 compute-0 podman[237565]: 2026-01-27 22:40:28.378495677 +0000 UTC m=+0.073089935 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 22:40:29 compute-0 podman[201529]: time="2026-01-27T22:40:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:40:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:40:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 27 22:40:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:40:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3846 "" "Go-http-client/1.1"
Jan 27 22:40:31 compute-0 podman[237589]: 2026-01-27 22:40:31.384015256 +0000 UTC m=+0.088127429 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, release=1755695350, vendor=Red Hat, Inc., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, architecture=x86_64, name=ubi9-minimal)
Jan 27 22:40:31 compute-0 openstack_network_exporter[204648]: ERROR   22:40:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:40:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:40:31 compute-0 openstack_network_exporter[204648]: ERROR   22:40:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:40:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:40:41 compute-0 podman[237610]: 2026-01-27 22:40:41.40866147 +0000 UTC m=+0.096199372 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, managed_by=edpm_ansible)
Jan 27 22:40:41 compute-0 nova_compute[185650]: 2026-01-27 22:40:41.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:40:42 compute-0 nova_compute[185650]: 2026-01-27 22:40:42.022 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:40:42 compute-0 nova_compute[185650]: 2026-01-27 22:40:42.023 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:40:42 compute-0 nova_compute[185650]: 2026-01-27 22:40:42.023 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:40:42 compute-0 nova_compute[185650]: 2026-01-27 22:40:42.024 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 22:40:42 compute-0 nova_compute[185650]: 2026-01-27 22:40:42.406 185654 WARNING nova.virt.libvirt.driver [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 22:40:42 compute-0 nova_compute[185650]: 2026-01-27 22:40:42.407 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5724MB free_disk=72.47546005249023GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 22:40:42 compute-0 nova_compute[185650]: 2026-01-27 22:40:42.407 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:40:42 compute-0 nova_compute[185650]: 2026-01-27 22:40:42.408 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:40:42 compute-0 nova_compute[185650]: 2026-01-27 22:40:42.469 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 22:40:42 compute-0 nova_compute[185650]: 2026-01-27 22:40:42.469 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 22:40:42 compute-0 nova_compute[185650]: 2026-01-27 22:40:42.481 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Refreshing inventories for resource provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 22:40:42 compute-0 nova_compute[185650]: 2026-01-27 22:40:42.539 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Updating ProviderTree inventory for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 22:40:42 compute-0 nova_compute[185650]: 2026-01-27 22:40:42.540 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Updating inventory in ProviderTree for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 22:40:42 compute-0 nova_compute[185650]: 2026-01-27 22:40:42.553 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Refreshing aggregate associations for resource provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 22:40:42 compute-0 nova_compute[185650]: 2026-01-27 22:40:42.570 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Refreshing trait associations for resource provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NODE,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_AVX,HW_CPU_X86_MMX,HW_CPU_X86_BMI,HW_CPU_X86_SSE41,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 22:40:42 compute-0 nova_compute[185650]: 2026-01-27 22:40:42.592 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 22:40:42 compute-0 nova_compute[185650]: 2026-01-27 22:40:42.607 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 22:40:42 compute-0 nova_compute[185650]: 2026-01-27 22:40:42.608 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 22:40:42 compute-0 nova_compute[185650]: 2026-01-27 22:40:42.609 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:40:43 compute-0 podman[237630]: 2026-01-27 22:40:43.360979311 +0000 UTC m=+0.063725781 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, tcib_managed=true)
Jan 27 22:40:44 compute-0 nova_compute[185650]: 2026-01-27 22:40:44.610 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:40:44 compute-0 nova_compute[185650]: 2026-01-27 22:40:44.610 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:40:44 compute-0 nova_compute[185650]: 2026-01-27 22:40:44.611 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 22:40:44 compute-0 nova_compute[185650]: 2026-01-27 22:40:44.990 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:40:45 compute-0 nova_compute[185650]: 2026-01-27 22:40:45.012 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:40:45 compute-0 nova_compute[185650]: 2026-01-27 22:40:45.012 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 22:40:45 compute-0 nova_compute[185650]: 2026-01-27 22:40:45.013 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 22:40:45 compute-0 nova_compute[185650]: 2026-01-27 22:40:45.030 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 22:40:45 compute-0 nova_compute[185650]: 2026-01-27 22:40:45.031 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:40:45 compute-0 podman[237649]: 2026-01-27 22:40:45.44182843 +0000 UTC m=+0.135479662 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 22:40:45 compute-0 nova_compute[185650]: 2026-01-27 22:40:45.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:40:46 compute-0 nova_compute[185650]: 2026-01-27 22:40:46.988 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:40:48 compute-0 podman[237672]: 2026-01-27 22:40:48.36141777 +0000 UTC m=+0.063980086 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 22:40:48 compute-0 nova_compute[185650]: 2026-01-27 22:40:48.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:40:49 compute-0 nova_compute[185650]: 2026-01-27 22:40:49.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:40:50 compute-0 podman[237693]: 2026-01-27 22:40:50.368748096 +0000 UTC m=+0.073359090 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, architecture=x86_64, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., config_id=kepler, com.redhat.component=ubi9-container, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., vcs-type=git, managed_by=edpm_ansible, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 22:40:51 compute-0 podman[237713]: 2026-01-27 22:40:51.407795534 +0000 UTC m=+0.111819552 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 22:40:59 compute-0 podman[237740]: 2026-01-27 22:40:59.430540778 +0000 UTC m=+0.125305732 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 22:40:59 compute-0 podman[201529]: time="2026-01-27T22:40:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:40:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:40:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 27 22:40:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3839 "" "Go-http-client/1.1"
Jan 27 22:41:01 compute-0 openstack_network_exporter[204648]: ERROR   22:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:41:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:41:01 compute-0 openstack_network_exporter[204648]: ERROR   22:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:41:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:41:02 compute-0 podman[237764]: 2026-01-27 22:41:02.384264783 +0000 UTC m=+0.087483080 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, build-date=2025-08-20T13:12:41, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., version=9.6, managed_by=edpm_ansible, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Jan 27 22:41:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:41:04.127 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:41:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:41:04.127 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:41:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:41:04.127 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:41:12 compute-0 podman[237783]: 2026-01-27 22:41:12.408081728 +0000 UTC m=+0.108073724 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260126, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 27 22:41:14 compute-0 podman[237803]: 2026-01-27 22:41:14.40888674 +0000 UTC m=+0.098008072 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 22:41:16 compute-0 podman[237821]: 2026-01-27 22:41:16.371870929 +0000 UTC m=+0.078869566 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 22:41:19 compute-0 podman[237845]: 2026-01-27 22:41:19.377777226 +0000 UTC m=+0.079743908 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Jan 27 22:41:21 compute-0 podman[237864]: 2026-01-27 22:41:21.391802467 +0000 UTC m=+0.099457349 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, name=ubi9, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, com.redhat.component=ubi9-container, distribution-scope=public, release-0.7.12=, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git)
Jan 27 22:41:22 compute-0 podman[237883]: 2026-01-27 22:41:22.433224655 +0000 UTC m=+0.127085786 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 22:41:25 compute-0 sshd-session[236199]: Received disconnect from 38.102.83.151 port 33810:11: disconnected by user
Jan 27 22:41:25 compute-0 sshd-session[236199]: Disconnected from user zuul 38.102.83.151 port 33810
Jan 27 22:41:25 compute-0 sshd-session[236196]: pam_unix(sshd:session): session closed for user zuul
Jan 27 22:41:25 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Jan 27 22:41:25 compute-0 systemd[1]: session-28.scope: Consumed 9.403s CPU time.
Jan 27 22:41:25 compute-0 systemd-logind[789]: Session 28 logged out. Waiting for processes to exit.
Jan 27 22:41:25 compute-0 systemd-logind[789]: Removed session 28.
Jan 27 22:41:29 compute-0 podman[201529]: time="2026-01-27T22:41:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:41:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:41:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 27 22:41:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3846 "" "Go-http-client/1.1"
Jan 27 22:41:30 compute-0 podman[237908]: 2026-01-27 22:41:30.39209961 +0000 UTC m=+0.093957381 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 22:41:31 compute-0 openstack_network_exporter[204648]: ERROR   22:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:41:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:41:31 compute-0 openstack_network_exporter[204648]: ERROR   22:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:41:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:41:33 compute-0 podman[237930]: 2026-01-27 22:41:33.426916186 +0000 UTC m=+0.114648406 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., config_id=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, managed_by=edpm_ansible, version=9.6, build-date=2025-08-20T13:12:41, io.openshift.expose-services=)
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.101 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.102 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.102 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c646060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19bbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f826c6475f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.102 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19bbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19bbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19bbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19bbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19bbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19bbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19bbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19bbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19bbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19bbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19bbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19bbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645490>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19bbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19bbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19bbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19bbc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19bbc0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19bbc0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19bbc0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19bbc0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19bbc0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19bbc0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19bbc0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19bbc0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6477a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19bbc0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.104 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f826c645dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.105 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f826c647800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.105 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f826c647650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f826c645640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f826c8ae7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f826c645a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f826c6462a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f826c647f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f826c645af0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f826c645d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f826c645b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.107 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f826c644a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.107 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f826c6453a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.107 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f826c6454c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.107 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f826c645520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.107 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f826c645d90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.108 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.108 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f826c646570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.108 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.108 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f826c645580>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.108 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.108 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f826c6455e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.108 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.108 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f826c644050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.108 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.108 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f826c647860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.108 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.108 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f826c6476e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.109 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f826c6456a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.109 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f826f277b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.109 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f826c647770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.109 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:41:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:41:38.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:41:41 compute-0 nova_compute[185650]: 2026-01-27 22:41:41.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:41:42 compute-0 nova_compute[185650]: 2026-01-27 22:41:42.020 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:41:42 compute-0 nova_compute[185650]: 2026-01-27 22:41:42.021 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:41:42 compute-0 nova_compute[185650]: 2026-01-27 22:41:42.021 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:41:42 compute-0 nova_compute[185650]: 2026-01-27 22:41:42.022 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 22:41:42 compute-0 nova_compute[185650]: 2026-01-27 22:41:42.314 185654 WARNING nova.virt.libvirt.driver [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 22:41:42 compute-0 nova_compute[185650]: 2026-01-27 22:41:42.315 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5722MB free_disk=72.4754409790039GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 22:41:42 compute-0 nova_compute[185650]: 2026-01-27 22:41:42.315 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:41:42 compute-0 nova_compute[185650]: 2026-01-27 22:41:42.315 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:41:42 compute-0 nova_compute[185650]: 2026-01-27 22:41:42.375 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 22:41:42 compute-0 nova_compute[185650]: 2026-01-27 22:41:42.376 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 22:41:42 compute-0 nova_compute[185650]: 2026-01-27 22:41:42.396 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 22:41:42 compute-0 nova_compute[185650]: 2026-01-27 22:41:42.409 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 22:41:42 compute-0 nova_compute[185650]: 2026-01-27 22:41:42.410 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 22:41:42 compute-0 nova_compute[185650]: 2026-01-27 22:41:42.411 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:41:43 compute-0 podman[237951]: 2026-01-27 22:41:43.363077118 +0000 UTC m=+0.069240815 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_compute)
Jan 27 22:41:44 compute-0 nova_compute[185650]: 2026-01-27 22:41:44.411 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:41:44 compute-0 podman[237971]: 2026-01-27 22:41:44.782601393 +0000 UTC m=+0.104744460 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 22:41:44 compute-0 nova_compute[185650]: 2026-01-27 22:41:44.994 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:41:44 compute-0 nova_compute[185650]: 2026-01-27 22:41:44.994 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 22:41:44 compute-0 nova_compute[185650]: 2026-01-27 22:41:44.995 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 22:41:45 compute-0 nova_compute[185650]: 2026-01-27 22:41:45.017 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 22:41:45 compute-0 nova_compute[185650]: 2026-01-27 22:41:45.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:41:45 compute-0 nova_compute[185650]: 2026-01-27 22:41:45.994 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:41:45 compute-0 nova_compute[185650]: 2026-01-27 22:41:45.994 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 22:41:46 compute-0 nova_compute[185650]: 2026-01-27 22:41:46.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:41:47 compute-0 podman[237990]: 2026-01-27 22:41:47.416227896 +0000 UTC m=+0.113640081 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 22:41:47 compute-0 nova_compute[185650]: 2026-01-27 22:41:47.987 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:41:48 compute-0 nova_compute[185650]: 2026-01-27 22:41:48.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:41:49 compute-0 nova_compute[185650]: 2026-01-27 22:41:49.992 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:41:50 compute-0 podman[238014]: 2026-01-27 22:41:50.366987418 +0000 UTC m=+0.067111163 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 22:41:52 compute-0 podman[238033]: 2026-01-27 22:41:52.433960118 +0000 UTC m=+0.113952719 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, com.redhat.component=ubi9-container, managed_by=edpm_ansible, release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, version=9.4, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.description=The Universal Base 
Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, container_name=kepler, maintainer=Red Hat, Inc.)
Jan 27 22:41:52 compute-0 podman[238052]: 2026-01-27 22:41:52.570003336 +0000 UTC m=+0.099526699 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 27 22:41:59 compute-0 podman[201529]: time="2026-01-27T22:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:41:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 27 22:41:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3853 "" "Go-http-client/1.1"
Jan 27 22:42:01 compute-0 podman[238078]: 2026-01-27 22:42:01.356604213 +0000 UTC m=+0.057481272 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 22:42:01 compute-0 openstack_network_exporter[204648]: ERROR   22:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:42:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:42:01 compute-0 openstack_network_exporter[204648]: ERROR   22:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:42:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:42:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:42:04.128 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:42:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:42:04.129 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:42:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:42:04.129 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:42:04 compute-0 podman[238102]: 2026-01-27 22:42:04.421554164 +0000 UTC m=+0.115780326 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, io.openshift.expose-services=, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, maintainer=Red Hat, Inc., name=ubi9-minimal, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, release=1755695350, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Jan 27 22:42:14 compute-0 podman[238123]: 2026-01-27 22:42:14.390341425 +0000 UTC m=+0.087572305 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true)
Jan 27 22:42:15 compute-0 podman[238144]: 2026-01-27 22:42:15.398371711 +0000 UTC m=+0.081242179 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 22:42:18 compute-0 podman[238163]: 2026-01-27 22:42:18.371302555 +0000 UTC m=+0.074745308 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 22:42:21 compute-0 podman[238187]: 2026-01-27 22:42:21.396786994 +0000 UTC m=+0.097854509 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 22:42:23 compute-0 podman[238208]: 2026-01-27 22:42:23.418394062 +0000 UTC m=+0.109221224 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, architecture=x86_64, maintainer=Red Hat, Inc., release-0.7.12=, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, container_name=kepler, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, release=1214.1726694543, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public)
Jan 27 22:42:23 compute-0 podman[238209]: 2026-01-27 22:42:23.457885854 +0000 UTC m=+0.150422264 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 27 22:42:29 compute-0 podman[201529]: time="2026-01-27T22:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:42:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 27 22:42:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3845 "" "Go-http-client/1.1"
Jan 27 22:42:30 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:42:30.240 107302 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:41:f8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '26:ae:8e:b8:80:28'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 22:42:30 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:42:30.242 107302 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 22:42:30 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:42:30.244 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e88f80e1-ee63-4bdc-95c3-ad473efb7428, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:42:31 compute-0 openstack_network_exporter[204648]: ERROR   22:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:42:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:42:31 compute-0 openstack_network_exporter[204648]: ERROR   22:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:42:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:42:32 compute-0 podman[238255]: 2026-01-27 22:42:32.37381077 +0000 UTC m=+0.069351231 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 22:42:35 compute-0 podman[238280]: 2026-01-27 22:42:35.388028406 +0000 UTC m=+0.092665584 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, version=9.6, release=1755695350, managed_by=edpm_ansible, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Jan 27 22:42:43 compute-0 nova_compute[185650]: 2026-01-27 22:42:43.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:42:43 compute-0 nova_compute[185650]: 2026-01-27 22:42:43.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:42:44 compute-0 nova_compute[185650]: 2026-01-27 22:42:44.016 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:42:44 compute-0 nova_compute[185650]: 2026-01-27 22:42:44.017 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:42:44 compute-0 nova_compute[185650]: 2026-01-27 22:42:44.017 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:42:44 compute-0 nova_compute[185650]: 2026-01-27 22:42:44.017 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 22:42:44 compute-0 nova_compute[185650]: 2026-01-27 22:42:44.328 185654 WARNING nova.virt.libvirt.driver [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 22:42:44 compute-0 nova_compute[185650]: 2026-01-27 22:42:44.329 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5724MB free_disk=72.47546005249023GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 22:42:44 compute-0 nova_compute[185650]: 2026-01-27 22:42:44.329 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:42:44 compute-0 nova_compute[185650]: 2026-01-27 22:42:44.330 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:42:44 compute-0 nova_compute[185650]: 2026-01-27 22:42:44.401 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 22:42:44 compute-0 nova_compute[185650]: 2026-01-27 22:42:44.402 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 22:42:44 compute-0 nova_compute[185650]: 2026-01-27 22:42:44.423 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 22:42:44 compute-0 nova_compute[185650]: 2026-01-27 22:42:44.434 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 22:42:44 compute-0 nova_compute[185650]: 2026-01-27 22:42:44.436 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 22:42:44 compute-0 nova_compute[185650]: 2026-01-27 22:42:44.436 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:42:44 compute-0 podman[238301]: 2026-01-27 22:42:44.789747874 +0000 UTC m=+0.109780579 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260126, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Jan 27 22:42:46 compute-0 podman[238319]: 2026-01-27 22:42:46.413119881 +0000 UTC m=+0.117531024 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Jan 27 22:42:46 compute-0 nova_compute[185650]: 2026-01-27 22:42:46.436 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:42:46 compute-0 nova_compute[185650]: 2026-01-27 22:42:46.437 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 22:42:46 compute-0 nova_compute[185650]: 2026-01-27 22:42:46.437 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 22:42:46 compute-0 nova_compute[185650]: 2026-01-27 22:42:46.451 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 22:42:46 compute-0 nova_compute[185650]: 2026-01-27 22:42:46.992 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:42:46 compute-0 nova_compute[185650]: 2026-01-27 22:42:46.993 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 22:42:47 compute-0 nova_compute[185650]: 2026-01-27 22:42:47.989 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:42:48 compute-0 nova_compute[185650]: 2026-01-27 22:42:48.004 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:42:48 compute-0 nova_compute[185650]: 2026-01-27 22:42:48.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:42:49 compute-0 podman[238338]: 2026-01-27 22:42:49.431414862 +0000 UTC m=+0.123865990 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 22:42:49 compute-0 nova_compute[185650]: 2026-01-27 22:42:49.988 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:42:49 compute-0 nova_compute[185650]: 2026-01-27 22:42:49.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:42:49 compute-0 nova_compute[185650]: 2026-01-27 22:42:49.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:42:52 compute-0 podman[238364]: 2026-01-27 22:42:52.374568382 +0000 UTC m=+0.082506172 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 22:42:54 compute-0 podman[238384]: 2026-01-27 22:42:54.395437979 +0000 UTC m=+0.101351526 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, io.buildah.version=1.29.0, architecture=x86_64, distribution-scope=public, io.openshift.tags=base rhel9, io.openshift.expose-services=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9)
Jan 27 22:42:54 compute-0 podman[238385]: 2026-01-27 22:42:54.414690851 +0000 UTC m=+0.115840786 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 22:42:59 compute-0 podman[201529]: time="2026-01-27T22:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:42:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 27 22:42:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3852 "" "Go-http-client/1.1"
Jan 27 22:43:01 compute-0 openstack_network_exporter[204648]: ERROR   22:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:43:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:43:01 compute-0 openstack_network_exporter[204648]: ERROR   22:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:43:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:43:03 compute-0 podman[238426]: 2026-01-27 22:43:03.384938491 +0000 UTC m=+0.090368222 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 22:43:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:04.130 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:43:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:04.131 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:43:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:04.131 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:43:06 compute-0 podman[238449]: 2026-01-27 22:43:06.45443447 +0000 UTC m=+0.143671897 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, version=9.6, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, config_id=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.expose-services=, name=ubi9-minimal, vcs-type=git, architecture=x86_64)
Jan 27 22:43:15 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:15.028 107302 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:41:f8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '26:ae:8e:b8:80:28'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 22:43:15 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:15.030 107302 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 22:43:15 compute-0 podman[238471]: 2026-01-27 22:43:15.354149352 +0000 UTC m=+0.060914082 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 27 22:43:17 compute-0 podman[238488]: 2026-01-27 22:43:17.415791974 +0000 UTC m=+0.111593907 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 27 22:43:19 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:19.032 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e88f80e1-ee63-4bdc-95c3-ad473efb7428, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:43:20 compute-0 podman[238507]: 2026-01-27 22:43:20.392891364 +0000 UTC m=+0.084463723 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 22:43:23 compute-0 podman[238532]: 2026-01-27 22:43:23.367639006 +0000 UTC m=+0.068778147 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 22:43:24 compute-0 nova_compute[185650]: 2026-01-27 22:43:24.940 185654 DEBUG oslo_concurrency.lockutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "344c74c3-95d6-4f19-993f-b4a89c9d074b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:43:24 compute-0 nova_compute[185650]: 2026-01-27 22:43:24.940 185654 DEBUG oslo_concurrency.lockutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "344c74c3-95d6-4f19-993f-b4a89c9d074b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:43:24 compute-0 nova_compute[185650]: 2026-01-27 22:43:24.960 185654 DEBUG nova.compute.manager [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 22:43:25 compute-0 nova_compute[185650]: 2026-01-27 22:43:25.074 185654 DEBUG oslo_concurrency.lockutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:43:25 compute-0 nova_compute[185650]: 2026-01-27 22:43:25.075 185654 DEBUG oslo_concurrency.lockutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:43:25 compute-0 nova_compute[185650]: 2026-01-27 22:43:25.082 185654 DEBUG nova.virt.hardware [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 22:43:25 compute-0 nova_compute[185650]: 2026-01-27 22:43:25.083 185654 INFO nova.compute.claims [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Claim successful on node compute-0.ctlplane.example.com
Jan 27 22:43:25 compute-0 nova_compute[185650]: 2026-01-27 22:43:25.189 185654 DEBUG nova.compute.provider_tree [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 22:43:25 compute-0 nova_compute[185650]: 2026-01-27 22:43:25.200 185654 DEBUG nova.scheduler.client.report [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 22:43:25 compute-0 nova_compute[185650]: 2026-01-27 22:43:25.219 185654 DEBUG oslo_concurrency.lockutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:43:25 compute-0 nova_compute[185650]: 2026-01-27 22:43:25.219 185654 DEBUG nova.compute.manager [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 22:43:25 compute-0 nova_compute[185650]: 2026-01-27 22:43:25.260 185654 DEBUG nova.compute.manager [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 22:43:25 compute-0 nova_compute[185650]: 2026-01-27 22:43:25.261 185654 DEBUG nova.network.neutron [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 22:43:25 compute-0 nova_compute[185650]: 2026-01-27 22:43:25.282 185654 INFO nova.virt.libvirt.driver [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 22:43:25 compute-0 nova_compute[185650]: 2026-01-27 22:43:25.319 185654 DEBUG nova.compute.manager [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 22:43:25 compute-0 podman[238551]: 2026-01-27 22:43:25.373157864 +0000 UTC m=+0.076568813 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=kepler, io.openshift.tags=base rhel9, version=9.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, container_name=kepler, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30)
Jan 27 22:43:25 compute-0 nova_compute[185650]: 2026-01-27 22:43:25.397 185654 DEBUG nova.compute.manager [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 22:43:25 compute-0 nova_compute[185650]: 2026-01-27 22:43:25.398 185654 DEBUG nova.virt.libvirt.driver [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 22:43:25 compute-0 nova_compute[185650]: 2026-01-27 22:43:25.398 185654 INFO nova.virt.libvirt.driver [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Creating image(s)
Jan 27 22:43:25 compute-0 nova_compute[185650]: 2026-01-27 22:43:25.398 185654 DEBUG oslo_concurrency.lockutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "/var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:43:25 compute-0 nova_compute[185650]: 2026-01-27 22:43:25.399 185654 DEBUG oslo_concurrency.lockutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "/var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:43:25 compute-0 nova_compute[185650]: 2026-01-27 22:43:25.399 185654 DEBUG oslo_concurrency.lockutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "/var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:43:25 compute-0 nova_compute[185650]: 2026-01-27 22:43:25.399 185654 DEBUG oslo_concurrency.lockutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "5c90c71330689347f3144a95195c41f3e929b39e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:43:25 compute-0 nova_compute[185650]: 2026-01-27 22:43:25.400 185654 DEBUG oslo_concurrency.lockutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "5c90c71330689347f3144a95195c41f3e929b39e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:43:25 compute-0 podman[238552]: 2026-01-27 22:43:25.429093285 +0000 UTC m=+0.113138707 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Jan 27 22:43:25 compute-0 nova_compute[185650]: 2026-01-27 22:43:25.991 185654 WARNING oslo_policy.policy [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 27 22:43:25 compute-0 nova_compute[185650]: 2026-01-27 22:43:25.992 185654 WARNING oslo_policy.policy [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 27 22:43:26 compute-0 nova_compute[185650]: 2026-01-27 22:43:26.617 185654 DEBUG oslo_concurrency.processutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:43:26 compute-0 nova_compute[185650]: 2026-01-27 22:43:26.635 185654 DEBUG nova.network.neutron [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Successfully created port: 389fa2e1-24bb-48bb-a577-b2f7ade8ddc5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 22:43:26 compute-0 nova_compute[185650]: 2026-01-27 22:43:26.681 185654 DEBUG oslo_concurrency.processutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e.part --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:43:26 compute-0 nova_compute[185650]: 2026-01-27 22:43:26.683 185654 DEBUG nova.virt.images [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] 7e803ca7-2382-4e5a-95f7-55acaa154415 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 27 22:43:26 compute-0 nova_compute[185650]: 2026-01-27 22:43:26.685 185654 DEBUG nova.privsep.utils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 27 22:43:26 compute-0 nova_compute[185650]: 2026-01-27 22:43:26.686 185654 DEBUG oslo_concurrency.processutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e.part /var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:43:26 compute-0 nova_compute[185650]: 2026-01-27 22:43:26.867 185654 DEBUG oslo_concurrency.processutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e.part /var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e.converted" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:43:26 compute-0 nova_compute[185650]: 2026-01-27 22:43:26.870 185654 DEBUG oslo_concurrency.processutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:43:26 compute-0 nova_compute[185650]: 2026-01-27 22:43:26.927 185654 DEBUG oslo_concurrency.processutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e.converted --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:43:26 compute-0 nova_compute[185650]: 2026-01-27 22:43:26.928 185654 DEBUG oslo_concurrency.lockutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "5c90c71330689347f3144a95195c41f3e929b39e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.528s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:43:26 compute-0 nova_compute[185650]: 2026-01-27 22:43:26.939 185654 INFO oslo.privsep.daemon [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpnmq907bv/privsep.sock']
Jan 27 22:43:27 compute-0 nova_compute[185650]: 2026-01-27 22:43:27.625 185654 INFO oslo.privsep.daemon [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Spawned new privsep daemon via rootwrap
Jan 27 22:43:27 compute-0 nova_compute[185650]: 2026-01-27 22:43:27.485 238611 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 27 22:43:27 compute-0 nova_compute[185650]: 2026-01-27 22:43:27.488 238611 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 27 22:43:27 compute-0 nova_compute[185650]: 2026-01-27 22:43:27.490 238611 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 27 22:43:27 compute-0 nova_compute[185650]: 2026-01-27 22:43:27.490 238611 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238611
Jan 27 22:43:27 compute-0 nova_compute[185650]: 2026-01-27 22:43:27.699 185654 DEBUG oslo_concurrency.processutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:43:27 compute-0 nova_compute[185650]: 2026-01-27 22:43:27.752 185654 DEBUG oslo_concurrency.processutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:43:27 compute-0 nova_compute[185650]: 2026-01-27 22:43:27.753 185654 DEBUG oslo_concurrency.lockutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "5c90c71330689347f3144a95195c41f3e929b39e" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:43:27 compute-0 nova_compute[185650]: 2026-01-27 22:43:27.754 185654 DEBUG oslo_concurrency.lockutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "5c90c71330689347f3144a95195c41f3e929b39e" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:43:27 compute-0 nova_compute[185650]: 2026-01-27 22:43:27.764 185654 DEBUG oslo_concurrency.processutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:43:27 compute-0 nova_compute[185650]: 2026-01-27 22:43:27.816 185654 DEBUG oslo_concurrency.processutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:43:27 compute-0 nova_compute[185650]: 2026-01-27 22:43:27.817 185654 DEBUG oslo_concurrency.processutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e,backing_fmt=raw /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:43:27 compute-0 nova_compute[185650]: 2026-01-27 22:43:27.850 185654 DEBUG oslo_concurrency.processutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e,backing_fmt=raw /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk 1073741824" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:43:27 compute-0 nova_compute[185650]: 2026-01-27 22:43:27.851 185654 DEBUG oslo_concurrency.lockutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "5c90c71330689347f3144a95195c41f3e929b39e" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:43:27 compute-0 nova_compute[185650]: 2026-01-27 22:43:27.852 185654 DEBUG oslo_concurrency.processutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:43:27 compute-0 nova_compute[185650]: 2026-01-27 22:43:27.925 185654 DEBUG oslo_concurrency.processutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:43:27 compute-0 nova_compute[185650]: 2026-01-27 22:43:27.926 185654 DEBUG nova.virt.disk.api [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Checking if we can resize image /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 22:43:27 compute-0 nova_compute[185650]: 2026-01-27 22:43:27.926 185654 DEBUG oslo_concurrency.processutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:43:27 compute-0 nova_compute[185650]: 2026-01-27 22:43:27.987 185654 DEBUG oslo_concurrency.processutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:43:27 compute-0 nova_compute[185650]: 2026-01-27 22:43:27.988 185654 DEBUG nova.virt.disk.api [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Cannot resize image /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 22:43:27 compute-0 nova_compute[185650]: 2026-01-27 22:43:27.988 185654 DEBUG nova.objects.instance [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lazy-loading 'migration_context' on Instance uuid 344c74c3-95d6-4f19-993f-b4a89c9d074b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 22:43:28 compute-0 nova_compute[185650]: 2026-01-27 22:43:28.005 185654 DEBUG oslo_concurrency.lockutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "/var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:43:28 compute-0 nova_compute[185650]: 2026-01-27 22:43:28.006 185654 DEBUG oslo_concurrency.lockutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "/var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:43:28 compute-0 nova_compute[185650]: 2026-01-27 22:43:28.007 185654 DEBUG oslo_concurrency.lockutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "/var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:43:28 compute-0 nova_compute[185650]: 2026-01-27 22:43:28.007 185654 DEBUG oslo_concurrency.lockutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:43:28 compute-0 nova_compute[185650]: 2026-01-27 22:43:28.008 185654 DEBUG oslo_concurrency.lockutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:43:28 compute-0 nova_compute[185650]: 2026-01-27 22:43:28.009 185654 DEBUG oslo_concurrency.processutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:43:28 compute-0 nova_compute[185650]: 2026-01-27 22:43:28.032 185654 DEBUG oslo_concurrency.processutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:43:28 compute-0 nova_compute[185650]: 2026-01-27 22:43:28.034 185654 DEBUG oslo_concurrency.processutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:43:28 compute-0 nova_compute[185650]: 2026-01-27 22:43:28.068 185654 DEBUG oslo_concurrency.processutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:43:28 compute-0 nova_compute[185650]: 2026-01-27 22:43:28.069 185654 DEBUG oslo_concurrency.lockutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:43:28 compute-0 nova_compute[185650]: 2026-01-27 22:43:28.085 185654 DEBUG oslo_concurrency.processutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:43:28 compute-0 nova_compute[185650]: 2026-01-27 22:43:28.169 185654 DEBUG oslo_concurrency.processutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:43:28 compute-0 nova_compute[185650]: 2026-01-27 22:43:28.170 185654 DEBUG oslo_concurrency.lockutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:43:28 compute-0 nova_compute[185650]: 2026-01-27 22:43:28.171 185654 DEBUG oslo_concurrency.lockutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:43:28 compute-0 nova_compute[185650]: 2026-01-27 22:43:28.181 185654 DEBUG oslo_concurrency.processutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:43:28 compute-0 nova_compute[185650]: 2026-01-27 22:43:28.238 185654 DEBUG oslo_concurrency.processutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:43:28 compute-0 nova_compute[185650]: 2026-01-27 22:43:28.239 185654 DEBUG oslo_concurrency.processutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:43:28 compute-0 nova_compute[185650]: 2026-01-27 22:43:28.275 185654 DEBUG oslo_concurrency.processutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 1073741824" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:43:28 compute-0 nova_compute[185650]: 2026-01-27 22:43:28.276 185654 DEBUG oslo_concurrency.lockutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:43:28 compute-0 nova_compute[185650]: 2026-01-27 22:43:28.276 185654 DEBUG oslo_concurrency.processutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:43:28 compute-0 nova_compute[185650]: 2026-01-27 22:43:28.331 185654 DEBUG oslo_concurrency.processutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:43:28 compute-0 nova_compute[185650]: 2026-01-27 22:43:28.332 185654 DEBUG nova.virt.libvirt.driver [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 22:43:28 compute-0 nova_compute[185650]: 2026-01-27 22:43:28.332 185654 DEBUG nova.virt.libvirt.driver [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Ensure instance console log exists: /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 22:43:28 compute-0 nova_compute[185650]: 2026-01-27 22:43:28.333 185654 DEBUG oslo_concurrency.lockutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:43:28 compute-0 nova_compute[185650]: 2026-01-27 22:43:28.333 185654 DEBUG oslo_concurrency.lockutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:43:28 compute-0 nova_compute[185650]: 2026-01-27 22:43:28.333 185654 DEBUG oslo_concurrency.lockutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:43:29 compute-0 podman[201529]: time="2026-01-27T22:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:43:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 27 22:43:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3856 "" "Go-http-client/1.1"
Jan 27 22:43:30 compute-0 nova_compute[185650]: 2026-01-27 22:43:30.596 185654 DEBUG nova.network.neutron [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Successfully updated port: 389fa2e1-24bb-48bb-a577-b2f7ade8ddc5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 22:43:30 compute-0 nova_compute[185650]: 2026-01-27 22:43:30.612 185654 DEBUG oslo_concurrency.lockutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "refresh_cache-344c74c3-95d6-4f19-993f-b4a89c9d074b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 22:43:30 compute-0 nova_compute[185650]: 2026-01-27 22:43:30.613 185654 DEBUG oslo_concurrency.lockutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquired lock "refresh_cache-344c74c3-95d6-4f19-993f-b4a89c9d074b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 22:43:30 compute-0 nova_compute[185650]: 2026-01-27 22:43:30.613 185654 DEBUG nova.network.neutron [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 22:43:30 compute-0 nova_compute[185650]: 2026-01-27 22:43:30.778 185654 DEBUG nova.network.neutron [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 22:43:31 compute-0 nova_compute[185650]: 2026-01-27 22:43:31.060 185654 DEBUG nova.compute.manager [req-2a249825-7a01-4252-ac4e-4baa2feea1d2 req-9777afdd-3a73-4a44-87f0-9cdff14b7f6a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Received event network-changed-389fa2e1-24bb-48bb-a577-b2f7ade8ddc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 22:43:31 compute-0 nova_compute[185650]: 2026-01-27 22:43:31.061 185654 DEBUG nova.compute.manager [req-2a249825-7a01-4252-ac4e-4baa2feea1d2 req-9777afdd-3a73-4a44-87f0-9cdff14b7f6a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Refreshing instance network info cache due to event network-changed-389fa2e1-24bb-48bb-a577-b2f7ade8ddc5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 22:43:31 compute-0 nova_compute[185650]: 2026-01-27 22:43:31.061 185654 DEBUG oslo_concurrency.lockutils [req-2a249825-7a01-4252-ac4e-4baa2feea1d2 req-9777afdd-3a73-4a44-87f0-9cdff14b7f6a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "refresh_cache-344c74c3-95d6-4f19-993f-b4a89c9d074b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 22:43:31 compute-0 openstack_network_exporter[204648]: ERROR   22:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:43:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:43:31 compute-0 openstack_network_exporter[204648]: ERROR   22:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:43:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.073 185654 DEBUG nova.network.neutron [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Updating instance_info_cache with network_info: [{"id": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "address": "fa:16:3e:27:72:fe", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.119", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap389fa2e1-24", "ovs_interfaceid": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.094 185654 DEBUG oslo_concurrency.lockutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Releasing lock "refresh_cache-344c74c3-95d6-4f19-993f-b4a89c9d074b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.095 185654 DEBUG nova.compute.manager [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Instance network_info: |[{"id": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "address": "fa:16:3e:27:72:fe", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.119", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap389fa2e1-24", "ovs_interfaceid": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.096 185654 DEBUG oslo_concurrency.lockutils [req-2a249825-7a01-4252-ac4e-4baa2feea1d2 req-9777afdd-3a73-4a44-87f0-9cdff14b7f6a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquired lock "refresh_cache-344c74c3-95d6-4f19-993f-b4a89c9d074b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.097 185654 DEBUG nova.network.neutron [req-2a249825-7a01-4252-ac4e-4baa2feea1d2 req-9777afdd-3a73-4a44-87f0-9cdff14b7f6a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Refreshing network info cache for port 389fa2e1-24bb-48bb-a577-b2f7ade8ddc5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.103 185654 DEBUG nova.virt.libvirt.driver [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Start _get_guest_xml network_info=[{"id": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "address": "fa:16:3e:27:72:fe", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.119", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap389fa2e1-24", "ovs_interfaceid": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-27T22:42:20Z,direct_url=<?>,disk_format='qcow2',id=7e803ca7-2382-4e5a-95f7-55acaa154415,min_disk=0,min_ram=0,name='cirros',owner='8318d5a200d74e4386cf4972db015b75',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-27T22:42:22Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_format': None, 'boot_index': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7e803ca7-2382-4e5a-95f7-55acaa154415'}], 'ephemerals': [{'size': 1, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'encrypted': False}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.115 185654 WARNING nova.virt.libvirt.driver [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.128 185654 DEBUG nova.virt.libvirt.host [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.129 185654 DEBUG nova.virt.libvirt.host [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.135 185654 DEBUG nova.virt.libvirt.host [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.136 185654 DEBUG nova.virt.libvirt.host [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.137 185654 DEBUG nova.virt.libvirt.driver [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.138 185654 DEBUG nova.virt.hardware [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T22:42:25Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-27T22:42:20Z,direct_url=<?>,disk_format='qcow2',id=7e803ca7-2382-4e5a-95f7-55acaa154415,min_disk=0,min_ram=0,name='cirros',owner='8318d5a200d74e4386cf4972db015b75',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-27T22:42:22Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.139 185654 DEBUG nova.virt.hardware [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.140 185654 DEBUG nova.virt.hardware [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.141 185654 DEBUG nova.virt.hardware [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.142 185654 DEBUG nova.virt.hardware [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.142 185654 DEBUG nova.virt.hardware [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.143 185654 DEBUG nova.virt.hardware [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.144 185654 DEBUG nova.virt.hardware [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.144 185654 DEBUG nova.virt.hardware [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.145 185654 DEBUG nova.virt.hardware [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.146 185654 DEBUG nova.virt.hardware [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.154 185654 DEBUG nova.privsep.utils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.156 185654 DEBUG nova.virt.libvirt.vif [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T22:43:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='7e803ca7-2382-4e5a-95f7-55acaa154415',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8318d5a200d74e4386cf4972db015b75',ramdisk_id='',reservation_id='r-rgck83ce',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='7e803ca7-2382-4e5a-95f7-55acaa154415',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs
=None,updated_at=2026-01-27T22:43:25Z,user_data=None,user_id='7387204f74504e288ed7a5dee73f5083',uuid=344c74c3-95d6-4f19-993f-b4a89c9d074b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "address": "fa:16:3e:27:72:fe", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.119", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap389fa2e1-24", "ovs_interfaceid": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.157 185654 DEBUG nova.network.os_vif_util [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Converting VIF {"id": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "address": "fa:16:3e:27:72:fe", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.119", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap389fa2e1-24", "ovs_interfaceid": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.159 185654 DEBUG nova.network.os_vif_util [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:72:fe,bridge_name='br-int',has_traffic_filtering=True,id=389fa2e1-24bb-48bb-a577-b2f7ade8ddc5,network=Network(98f694e3-becc-413f-b42b-35a7171f7f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap389fa2e1-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.162 185654 DEBUG nova.objects.instance [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lazy-loading 'pci_devices' on Instance uuid 344c74c3-95d6-4f19-993f-b4a89c9d074b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.178 185654 DEBUG nova.virt.libvirt.driver [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] End _get_guest_xml xml=<domain type="kvm">
Jan 27 22:43:32 compute-0 nova_compute[185650]:   <uuid>344c74c3-95d6-4f19-993f-b4a89c9d074b</uuid>
Jan 27 22:43:32 compute-0 nova_compute[185650]:   <name>instance-00000001</name>
Jan 27 22:43:32 compute-0 nova_compute[185650]:   <memory>524288</memory>
Jan 27 22:43:32 compute-0 nova_compute[185650]:   <vcpu>1</vcpu>
Jan 27 22:43:32 compute-0 nova_compute[185650]:   <metadata>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 22:43:32 compute-0 nova_compute[185650]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:       <nova:name>test_0</nova:name>
Jan 27 22:43:32 compute-0 nova_compute[185650]:       <nova:creationTime>2026-01-27 22:43:32</nova:creationTime>
Jan 27 22:43:32 compute-0 nova_compute[185650]:       <nova:flavor name="m1.small">
Jan 27 22:43:32 compute-0 nova_compute[185650]:         <nova:memory>512</nova:memory>
Jan 27 22:43:32 compute-0 nova_compute[185650]:         <nova:disk>1</nova:disk>
Jan 27 22:43:32 compute-0 nova_compute[185650]:         <nova:swap>0</nova:swap>
Jan 27 22:43:32 compute-0 nova_compute[185650]:         <nova:ephemeral>1</nova:ephemeral>
Jan 27 22:43:32 compute-0 nova_compute[185650]:         <nova:vcpus>1</nova:vcpus>
Jan 27 22:43:32 compute-0 nova_compute[185650]:       </nova:flavor>
Jan 27 22:43:32 compute-0 nova_compute[185650]:       <nova:owner>
Jan 27 22:43:32 compute-0 nova_compute[185650]:         <nova:user uuid="7387204f74504e288ed7a5dee73f5083">admin</nova:user>
Jan 27 22:43:32 compute-0 nova_compute[185650]:         <nova:project uuid="8318d5a200d74e4386cf4972db015b75">admin</nova:project>
Jan 27 22:43:32 compute-0 nova_compute[185650]:       </nova:owner>
Jan 27 22:43:32 compute-0 nova_compute[185650]:       <nova:root type="image" uuid="7e803ca7-2382-4e5a-95f7-55acaa154415"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:       <nova:ports>
Jan 27 22:43:32 compute-0 nova_compute[185650]:         <nova:port uuid="389fa2e1-24bb-48bb-a577-b2f7ade8ddc5">
Jan 27 22:43:32 compute-0 nova_compute[185650]:           <nova:ip type="fixed" address="192.168.0.119" ipVersion="4"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:         </nova:port>
Jan 27 22:43:32 compute-0 nova_compute[185650]:       </nova:ports>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     </nova:instance>
Jan 27 22:43:32 compute-0 nova_compute[185650]:   </metadata>
Jan 27 22:43:32 compute-0 nova_compute[185650]:   <sysinfo type="smbios">
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <system>
Jan 27 22:43:32 compute-0 nova_compute[185650]:       <entry name="manufacturer">RDO</entry>
Jan 27 22:43:32 compute-0 nova_compute[185650]:       <entry name="product">OpenStack Compute</entry>
Jan 27 22:43:32 compute-0 nova_compute[185650]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 22:43:32 compute-0 nova_compute[185650]:       <entry name="serial">344c74c3-95d6-4f19-993f-b4a89c9d074b</entry>
Jan 27 22:43:32 compute-0 nova_compute[185650]:       <entry name="uuid">344c74c3-95d6-4f19-993f-b4a89c9d074b</entry>
Jan 27 22:43:32 compute-0 nova_compute[185650]:       <entry name="family">Virtual Machine</entry>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     </system>
Jan 27 22:43:32 compute-0 nova_compute[185650]:   </sysinfo>
Jan 27 22:43:32 compute-0 nova_compute[185650]:   <os>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <boot dev="hd"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <smbios mode="sysinfo"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:   </os>
Jan 27 22:43:32 compute-0 nova_compute[185650]:   <features>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <acpi/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <apic/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <vmcoreinfo/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:   </features>
Jan 27 22:43:32 compute-0 nova_compute[185650]:   <clock offset="utc">
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <timer name="hpet" present="no"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:   </clock>
Jan 27 22:43:32 compute-0 nova_compute[185650]:   <cpu mode="host-model" match="exact">
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:   </cpu>
Jan 27 22:43:32 compute-0 nova_compute[185650]:   <devices>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <disk type="file" device="disk">
Jan 27 22:43:32 compute-0 nova_compute[185650]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:       <source file="/var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:       <target dev="vda" bus="virtio"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     </disk>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <disk type="file" device="disk">
Jan 27 22:43:32 compute-0 nova_compute[185650]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:       <source file="/var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:       <target dev="vdb" bus="virtio"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     </disk>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <disk type="file" device="cdrom">
Jan 27 22:43:32 compute-0 nova_compute[185650]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:       <source file="/var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.config"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:       <target dev="sda" bus="sata"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     </disk>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <interface type="ethernet">
Jan 27 22:43:32 compute-0 nova_compute[185650]:       <mac address="fa:16:3e:27:72:fe"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:       <model type="virtio"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:       <mtu size="1442"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:       <target dev="tap389fa2e1-24"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     </interface>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <serial type="pty">
Jan 27 22:43:32 compute-0 nova_compute[185650]:       <log file="/var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/console.log" append="off"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     </serial>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <video>
Jan 27 22:43:32 compute-0 nova_compute[185650]:       <model type="virtio"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     </video>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <input type="tablet" bus="usb"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <rng model="virtio">
Jan 27 22:43:32 compute-0 nova_compute[185650]:       <backend model="random">/dev/urandom</backend>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     </rng>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <controller type="usb" index="0"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     <memballoon model="virtio">
Jan 27 22:43:32 compute-0 nova_compute[185650]:       <stats period="10"/>
Jan 27 22:43:32 compute-0 nova_compute[185650]:     </memballoon>
Jan 27 22:43:32 compute-0 nova_compute[185650]:   </devices>
Jan 27 22:43:32 compute-0 nova_compute[185650]: </domain>
Jan 27 22:43:32 compute-0 nova_compute[185650]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.180 185654 DEBUG nova.compute.manager [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Preparing to wait for external event network-vif-plugged-389fa2e1-24bb-48bb-a577-b2f7ade8ddc5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.181 185654 DEBUG oslo_concurrency.lockutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "344c74c3-95d6-4f19-993f-b4a89c9d074b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.181 185654 DEBUG oslo_concurrency.lockutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "344c74c3-95d6-4f19-993f-b4a89c9d074b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.181 185654 DEBUG oslo_concurrency.lockutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "344c74c3-95d6-4f19-993f-b4a89c9d074b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.182 185654 DEBUG nova.virt.libvirt.vif [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T22:43:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='7e803ca7-2382-4e5a-95f7-55acaa154415',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8318d5a200d74e4386cf4972db015b75',ramdisk_id='',reservation_id='r-rgck83ce',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='7e803ca7-2382-4e5a-95f7-55acaa154415',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T22:43:25Z,user_data=None,user_id='7387204f74504e288ed7a5dee73f5083',uuid=344c74c3-95d6-4f19-993f-b4a89c9d074b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "address": "fa:16:3e:27:72:fe", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.119", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap389fa2e1-24", "ovs_interfaceid": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.183 185654 DEBUG nova.network.os_vif_util [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Converting VIF {"id": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "address": "fa:16:3e:27:72:fe", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.119", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap389fa2e1-24", "ovs_interfaceid": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.183 185654 DEBUG nova.network.os_vif_util [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:72:fe,bridge_name='br-int',has_traffic_filtering=True,id=389fa2e1-24bb-48bb-a577-b2f7ade8ddc5,network=Network(98f694e3-becc-413f-b42b-35a7171f7f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap389fa2e1-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.184 185654 DEBUG os_vif [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:72:fe,bridge_name='br-int',has_traffic_filtering=True,id=389fa2e1-24bb-48bb-a577-b2f7ade8ddc5,network=Network(98f694e3-becc-413f-b42b-35a7171f7f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap389fa2e1-24') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.218 185654 DEBUG ovsdbapp.backend.ovs_idl [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.218 185654 DEBUG ovsdbapp.backend.ovs_idl [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.219 185654 DEBUG ovsdbapp.backend.ovs_idl [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.219 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.220 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.220 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.225 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.227 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.230 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.239 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.239 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.240 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.241 185654 INFO oslo.privsep.daemon [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpikiu6_io/privsep.sock']
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.990 185654 INFO oslo.privsep.daemon [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Spawned new privsep daemon via rootwrap
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.822 238648 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.830 238648 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.835 238648 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Jan 27 22:43:32 compute-0 nova_compute[185650]: 2026-01-27 22:43:32.835 238648 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238648
Jan 27 22:43:33 compute-0 nova_compute[185650]: 2026-01-27 22:43:33.322 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:43:33 compute-0 nova_compute[185650]: 2026-01-27 22:43:33.323 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap389fa2e1-24, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:43:33 compute-0 nova_compute[185650]: 2026-01-27 22:43:33.324 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap389fa2e1-24, col_values=(('external_ids', {'iface-id': '389fa2e1-24bb-48bb-a577-b2f7ade8ddc5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:27:72:fe', 'vm-uuid': '344c74c3-95d6-4f19-993f-b4a89c9d074b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:43:33 compute-0 nova_compute[185650]: 2026-01-27 22:43:33.328 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:43:33 compute-0 NetworkManager[56600]: <info>  [1769553813.3296] manager: (tap389fa2e1-24): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 27 22:43:33 compute-0 nova_compute[185650]: 2026-01-27 22:43:33.332 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 22:43:33 compute-0 nova_compute[185650]: 2026-01-27 22:43:33.343 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:43:33 compute-0 nova_compute[185650]: 2026-01-27 22:43:33.345 185654 INFO os_vif [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:72:fe,bridge_name='br-int',has_traffic_filtering=True,id=389fa2e1-24bb-48bb-a577-b2f7ade8ddc5,network=Network(98f694e3-becc-413f-b42b-35a7171f7f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap389fa2e1-24')
Jan 27 22:43:33 compute-0 nova_compute[185650]: 2026-01-27 22:43:33.416 185654 DEBUG nova.virt.libvirt.driver [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 22:43:33 compute-0 nova_compute[185650]: 2026-01-27 22:43:33.416 185654 DEBUG nova.virt.libvirt.driver [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 22:43:33 compute-0 nova_compute[185650]: 2026-01-27 22:43:33.417 185654 DEBUG nova.virt.libvirt.driver [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 22:43:33 compute-0 nova_compute[185650]: 2026-01-27 22:43:33.417 185654 DEBUG nova.virt.libvirt.driver [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] No VIF found with MAC fa:16:3e:27:72:fe, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 22:43:33 compute-0 nova_compute[185650]: 2026-01-27 22:43:33.418 185654 INFO nova.virt.libvirt.driver [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Using config drive
Jan 27 22:43:34 compute-0 nova_compute[185650]: 2026-01-27 22:43:34.251 185654 DEBUG nova.network.neutron [req-2a249825-7a01-4252-ac4e-4baa2feea1d2 req-9777afdd-3a73-4a44-87f0-9cdff14b7f6a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Updated VIF entry in instance network info cache for port 389fa2e1-24bb-48bb-a577-b2f7ade8ddc5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 22:43:34 compute-0 nova_compute[185650]: 2026-01-27 22:43:34.252 185654 DEBUG nova.network.neutron [req-2a249825-7a01-4252-ac4e-4baa2feea1d2 req-9777afdd-3a73-4a44-87f0-9cdff14b7f6a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Updating instance_info_cache with network_info: [{"id": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "address": "fa:16:3e:27:72:fe", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.119", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap389fa2e1-24", "ovs_interfaceid": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 22:43:34 compute-0 nova_compute[185650]: 2026-01-27 22:43:34.267 185654 DEBUG oslo_concurrency.lockutils [req-2a249825-7a01-4252-ac4e-4baa2feea1d2 req-9777afdd-3a73-4a44-87f0-9cdff14b7f6a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Releasing lock "refresh_cache-344c74c3-95d6-4f19-993f-b4a89c9d074b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 22:43:34 compute-0 podman[238654]: 2026-01-27 22:43:34.367444918 +0000 UTC m=+0.072348777 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 22:43:34 compute-0 nova_compute[185650]: 2026-01-27 22:43:34.367 185654 INFO nova.virt.libvirt.driver [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Creating config drive at /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.config
Jan 27 22:43:34 compute-0 nova_compute[185650]: 2026-01-27 22:43:34.372 185654 DEBUG oslo_concurrency.processutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5bixzrlg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:43:34 compute-0 nova_compute[185650]: 2026-01-27 22:43:34.492 185654 DEBUG oslo_concurrency.processutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5bixzrlg" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:43:34 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Jan 27 22:43:34 compute-0 kernel: tap389fa2e1-24: entered promiscuous mode
Jan 27 22:43:34 compute-0 NetworkManager[56600]: <info>  [1769553814.6364] manager: (tap389fa2e1-24): new Tun device (/org/freedesktop/NetworkManager/Devices/20)
Jan 27 22:43:34 compute-0 ovn_controller[98048]: 2026-01-27T22:43:34Z|00027|binding|INFO|Claiming lport 389fa2e1-24bb-48bb-a577-b2f7ade8ddc5 for this chassis.
Jan 27 22:43:34 compute-0 ovn_controller[98048]: 2026-01-27T22:43:34Z|00028|binding|INFO|389fa2e1-24bb-48bb-a577-b2f7ade8ddc5: Claiming fa:16:3e:27:72:fe 192.168.0.119
Jan 27 22:43:34 compute-0 nova_compute[185650]: 2026-01-27 22:43:34.635 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:43:34 compute-0 nova_compute[185650]: 2026-01-27 22:43:34.642 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:43:34 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:34.663 107302 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:72:fe 192.168.0.119'], port_security=['fa:16:3e:27:72:fe 192.168.0.119'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.119/24', 'neutron:device_id': '344c74c3-95d6-4f19-993f-b4a89c9d074b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-98f694e3-becc-413f-b42b-35a7171f7f96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8318d5a200d74e4386cf4972db015b75', 'neutron:revision_number': '2', 'neutron:security_group_ids': '597f1057-390b-408a-b8d0-705fb45de27b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d21d3e2-2f64-49c8-bca6-9efc66f5bd67, chassis=[<ovs.db.idl.Row object at 0x7f8d908cb640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8d908cb640>], logical_port=389fa2e1-24bb-48bb-a577-b2f7ade8ddc5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 22:43:34 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:34.665 107302 INFO neutron.agent.ovn.metadata.agent [-] Port 389fa2e1-24bb-48bb-a577-b2f7ade8ddc5 in datapath 98f694e3-becc-413f-b42b-35a7171f7f96 bound to our chassis
Jan 27 22:43:34 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:34.670 107302 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 98f694e3-becc-413f-b42b-35a7171f7f96
Jan 27 22:43:34 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:34.672 107302 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpzkfhxwg8/privsep.sock']
Jan 27 22:43:34 compute-0 systemd-udevd[238699]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 22:43:34 compute-0 NetworkManager[56600]: <info>  [1769553814.7180] device (tap389fa2e1-24): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 22:43:34 compute-0 NetworkManager[56600]: <info>  [1769553814.7192] device (tap389fa2e1-24): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 22:43:34 compute-0 systemd-machined[157036]: New machine qemu-1-instance-00000001.
Jan 27 22:43:34 compute-0 nova_compute[185650]: 2026-01-27 22:43:34.740 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:43:34 compute-0 ovn_controller[98048]: 2026-01-27T22:43:34Z|00029|binding|INFO|Setting lport 389fa2e1-24bb-48bb-a577-b2f7ade8ddc5 ovn-installed in OVS
Jan 27 22:43:34 compute-0 ovn_controller[98048]: 2026-01-27T22:43:34Z|00030|binding|INFO|Setting lport 389fa2e1-24bb-48bb-a577-b2f7ade8ddc5 up in Southbound
Jan 27 22:43:34 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Jan 27 22:43:34 compute-0 nova_compute[185650]: 2026-01-27 22:43:34.750 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:43:34 compute-0 nova_compute[185650]: 2026-01-27 22:43:34.856 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:43:34 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 27 22:43:35 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 27 22:43:35 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:35.369 107302 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 27 22:43:35 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:35.370 107302 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpzkfhxwg8/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 27 22:43:35 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:35.227 238735 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 27 22:43:35 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:35.231 238735 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 27 22:43:35 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:35.234 238735 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Jan 27 22:43:35 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:35.234 238735 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238735
Jan 27 22:43:35 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:35.373 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[8006d62b-928b-470c-9e11-513ee82139f3]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:43:35 compute-0 nova_compute[185650]: 2026-01-27 22:43:35.470 185654 DEBUG nova.virt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Emitting event <LifecycleEvent: 1769553815.4696262, 344c74c3-95d6-4f19-993f-b4a89c9d074b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 22:43:35 compute-0 nova_compute[185650]: 2026-01-27 22:43:35.472 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] VM Started (Lifecycle Event)
Jan 27 22:43:35 compute-0 nova_compute[185650]: 2026-01-27 22:43:35.503 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 22:43:35 compute-0 nova_compute[185650]: 2026-01-27 22:43:35.510 185654 DEBUG nova.virt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Emitting event <LifecycleEvent: 1769553815.469765, 344c74c3-95d6-4f19-993f-b4a89c9d074b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 22:43:35 compute-0 nova_compute[185650]: 2026-01-27 22:43:35.510 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] VM Paused (Lifecycle Event)
Jan 27 22:43:35 compute-0 nova_compute[185650]: 2026-01-27 22:43:35.534 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 22:43:35 compute-0 nova_compute[185650]: 2026-01-27 22:43:35.540 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 22:43:35 compute-0 nova_compute[185650]: 2026-01-27 22:43:35.559 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 22:43:35 compute-0 nova_compute[185650]: 2026-01-27 22:43:35.622 185654 DEBUG nova.compute.manager [req-fe0ab9ae-758b-4810-836e-549a006bef24 req-f5162b93-c670-481e-885a-bc4d6f6cee64 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Received event network-vif-plugged-389fa2e1-24bb-48bb-a577-b2f7ade8ddc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 22:43:35 compute-0 nova_compute[185650]: 2026-01-27 22:43:35.623 185654 DEBUG oslo_concurrency.lockutils [req-fe0ab9ae-758b-4810-836e-549a006bef24 req-f5162b93-c670-481e-885a-bc4d6f6cee64 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "344c74c3-95d6-4f19-993f-b4a89c9d074b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:43:35 compute-0 nova_compute[185650]: 2026-01-27 22:43:35.624 185654 DEBUG oslo_concurrency.lockutils [req-fe0ab9ae-758b-4810-836e-549a006bef24 req-f5162b93-c670-481e-885a-bc4d6f6cee64 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "344c74c3-95d6-4f19-993f-b4a89c9d074b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:43:35 compute-0 nova_compute[185650]: 2026-01-27 22:43:35.624 185654 DEBUG oslo_concurrency.lockutils [req-fe0ab9ae-758b-4810-836e-549a006bef24 req-f5162b93-c670-481e-885a-bc4d6f6cee64 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "344c74c3-95d6-4f19-993f-b4a89c9d074b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:43:35 compute-0 nova_compute[185650]: 2026-01-27 22:43:35.625 185654 DEBUG nova.compute.manager [req-fe0ab9ae-758b-4810-836e-549a006bef24 req-f5162b93-c670-481e-885a-bc4d6f6cee64 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Processing event network-vif-plugged-389fa2e1-24bb-48bb-a577-b2f7ade8ddc5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 22:43:35 compute-0 nova_compute[185650]: 2026-01-27 22:43:35.626 185654 DEBUG nova.compute.manager [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 22:43:35 compute-0 nova_compute[185650]: 2026-01-27 22:43:35.632 185654 DEBUG nova.virt.libvirt.driver [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 22:43:35 compute-0 nova_compute[185650]: 2026-01-27 22:43:35.645 185654 DEBUG nova.virt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Emitting event <LifecycleEvent: 1769553815.6440425, 344c74c3-95d6-4f19-993f-b4a89c9d074b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 22:43:35 compute-0 nova_compute[185650]: 2026-01-27 22:43:35.645 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] VM Resumed (Lifecycle Event)
Jan 27 22:43:35 compute-0 nova_compute[185650]: 2026-01-27 22:43:35.647 185654 INFO nova.virt.libvirt.driver [-] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Instance spawned successfully.
Jan 27 22:43:35 compute-0 nova_compute[185650]: 2026-01-27 22:43:35.648 185654 DEBUG nova.virt.libvirt.driver [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 22:43:35 compute-0 nova_compute[185650]: 2026-01-27 22:43:35.666 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 22:43:35 compute-0 nova_compute[185650]: 2026-01-27 22:43:35.672 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 22:43:35 compute-0 nova_compute[185650]: 2026-01-27 22:43:35.706 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 22:43:35 compute-0 nova_compute[185650]: 2026-01-27 22:43:35.721 185654 DEBUG nova.virt.libvirt.driver [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 22:43:35 compute-0 nova_compute[185650]: 2026-01-27 22:43:35.722 185654 DEBUG nova.virt.libvirt.driver [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 22:43:35 compute-0 nova_compute[185650]: 2026-01-27 22:43:35.723 185654 DEBUG nova.virt.libvirt.driver [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 22:43:35 compute-0 nova_compute[185650]: 2026-01-27 22:43:35.723 185654 DEBUG nova.virt.libvirt.driver [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 22:43:35 compute-0 nova_compute[185650]: 2026-01-27 22:43:35.724 185654 DEBUG nova.virt.libvirt.driver [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 22:43:35 compute-0 nova_compute[185650]: 2026-01-27 22:43:35.725 185654 DEBUG nova.virt.libvirt.driver [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 22:43:35 compute-0 nova_compute[185650]: 2026-01-27 22:43:35.783 185654 INFO nova.compute.manager [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Took 10.39 seconds to spawn the instance on the hypervisor.
Jan 27 22:43:35 compute-0 nova_compute[185650]: 2026-01-27 22:43:35.784 185654 DEBUG nova.compute.manager [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 22:43:35 compute-0 nova_compute[185650]: 2026-01-27 22:43:35.839 185654 INFO nova.compute.manager [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Took 10.79 seconds to build instance.
Jan 27 22:43:35 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:35.854 238735 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:43:35 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:35.855 238735 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:43:35 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:35.855 238735 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:43:35 compute-0 nova_compute[185650]: 2026-01-27 22:43:35.857 185654 DEBUG oslo_concurrency.lockutils [None req-2dcfd6ef-9fbc-4692-86ad-325b8405063e 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "344c74c3-95d6-4f19-993f-b4a89c9d074b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:43:36 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:36.398 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[c6129989-9857-4840-bbd4-74c5daf35184]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:43:36 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:36.399 107302 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap98f694e3-b1 in ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 22:43:36 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:36.401 238735 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap98f694e3-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 22:43:36 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:36.401 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[8c9cb918-8168-4828-858c-07000b5c1e32]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:43:36 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:36.404 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[e89f0e94-f0de-486a-aa81-571bdbdd0711]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:43:36 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:36.423 107797 DEBUG oslo.privsep.daemon [-] privsep: reply[1acba436-30cd-48f0-95b5-c8b7682ea781]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:43:36 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:36.457 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[2a6ae246-ced2-452c-ad4b-058d46a801bf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:43:36 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:36.460 107302 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpaaowz61_/privsep.sock']
Jan 27 22:43:37 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:37.188 107302 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 27 22:43:37 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:37.188 107302 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpaaowz61_/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 27 22:43:37 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:37.024 238756 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 27 22:43:37 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:37.029 238756 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 27 22:43:37 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:37.033 238756 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 27 22:43:37 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:37.033 238756 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238756
Jan 27 22:43:37 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:37.192 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[43685711-c4bd-44ff-a8d3-ff61046e5fda]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:43:37 compute-0 podman[238760]: 2026-01-27 22:43:37.399208665 +0000 UTC m=+0.094701916 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, vcs-type=git, managed_by=edpm_ansible, io.openshift.expose-services=, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public)
Jan 27 22:43:37 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:37.689 238756 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:43:37 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:37.690 238756 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:43:37 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:37.690 238756 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:43:37 compute-0 nova_compute[185650]: 2026-01-27 22:43:37.715 185654 DEBUG nova.compute.manager [req-e3a6c5bf-2d64-43e5-9ce7-407640d72d06 req-9102345d-ad07-4c7c-8ec1-3f6a3430171f b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Received event network-vif-plugged-389fa2e1-24bb-48bb-a577-b2f7ade8ddc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 22:43:37 compute-0 nova_compute[185650]: 2026-01-27 22:43:37.716 185654 DEBUG oslo_concurrency.lockutils [req-e3a6c5bf-2d64-43e5-9ce7-407640d72d06 req-9102345d-ad07-4c7c-8ec1-3f6a3430171f b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "344c74c3-95d6-4f19-993f-b4a89c9d074b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:43:37 compute-0 nova_compute[185650]: 2026-01-27 22:43:37.716 185654 DEBUG oslo_concurrency.lockutils [req-e3a6c5bf-2d64-43e5-9ce7-407640d72d06 req-9102345d-ad07-4c7c-8ec1-3f6a3430171f b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "344c74c3-95d6-4f19-993f-b4a89c9d074b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:43:37 compute-0 nova_compute[185650]: 2026-01-27 22:43:37.716 185654 DEBUG oslo_concurrency.lockutils [req-e3a6c5bf-2d64-43e5-9ce7-407640d72d06 req-9102345d-ad07-4c7c-8ec1-3f6a3430171f b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "344c74c3-95d6-4f19-993f-b4a89c9d074b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:43:37 compute-0 nova_compute[185650]: 2026-01-27 22:43:37.717 185654 DEBUG nova.compute.manager [req-e3a6c5bf-2d64-43e5-9ce7-407640d72d06 req-9102345d-ad07-4c7c-8ec1-3f6a3430171f b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] No waiting events found dispatching network-vif-plugged-389fa2e1-24bb-48bb-a577-b2f7ade8ddc5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 22:43:37 compute-0 nova_compute[185650]: 2026-01-27 22:43:37.717 185654 WARNING nova.compute.manager [req-e3a6c5bf-2d64-43e5-9ce7-407640d72d06 req-9102345d-ad07-4c7c-8ec1-3f6a3430171f b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Received unexpected event network-vif-plugged-389fa2e1-24bb-48bb-a577-b2f7ade8ddc5 for instance with vm_state active and task_state None.
Jan 27 22:43:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:38.102 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 22:43:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:38.102 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 22:43:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:38.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c646060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:43:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:38.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f826c6475f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:43:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:38.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:43:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:38.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:43:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:38.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:43:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:38.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:43:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:38.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:43:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:38.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:43:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:38.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:43:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:38.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:43:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:38.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:43:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:43:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:43:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:43:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645490>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:43:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:43:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:43:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:43:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:43:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:43:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:43:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:43:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:38.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:43:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:38.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:43:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:38.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:43:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:38.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:43:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:38.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6477a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:43:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:38.109 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 344c74c3-95d6-4f19-993f-b4a89c9d074b from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:38.320 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[9e43feb3-1180-42c3-8204-ca9ec753b637]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:43:38 compute-0 nova_compute[185650]: 2026-01-27 22:43:38.328 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:38.350 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[a3b86320-534a-4b28-bf0c-158cfd18d050]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:43:38 compute-0 NetworkManager[56600]: <info>  [1769553818.3654] manager: (tap98f694e3-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/21)
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:38.387 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[0731e214-55c2-4991-8f71-202e7cd1a0df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:38.391 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[5be2d020-1593-459b-8f79-528af0abd304]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:43:38 compute-0 NetworkManager[56600]: <info>  [1769553818.4168] device (tap98f694e3-b0): carrier: link connected
Jan 27 22:43:38 compute-0 systemd-udevd[238790]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 22:43:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:38.422 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/344c74c3-95d6-4f19-993f-b4a89c9d074b -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}154da27a0715c4500fb4356c9136f029f6352e657551e62d11427d8299e729cc" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:38.421 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[bda2cbe9-66c8-40a5-87f4-f4d94c27fbd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:38.443 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[ecebd8f9-5a24-4063-a7fd-1d8a117e5fff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap98f694e3-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:25:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 365000, 'reachable_time': 41070, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238792, 'error': None, 'target': 'ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:38.461 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[2793e2e5-63ee-47e5-bc54-0cc696a8c6ce]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe59:25f8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 365000, 'tstamp': 365000}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238807, 'error': None, 'target': 'ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:38.481 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[c7e9dc63-d837-4f2e-b2d7-6f88e41b1e39]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap98f694e3-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:25:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 365000, 'reachable_time': 41070, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 238808, 'error': None, 'target': 'ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:38.518 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[a00e8983-d820-4d38-81b1-36d84766e158]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:38.584 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[38f3139e-115b-4094-bd0e-1ced8c7f3aed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:38.585 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98f694e3-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:38.586 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:38.586 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98f694e3-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:43:38 compute-0 nova_compute[185650]: 2026-01-27 22:43:38.588 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:43:38 compute-0 NetworkManager[56600]: <info>  [1769553818.5894] manager: (tap98f694e3-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Jan 27 22:43:38 compute-0 kernel: tap98f694e3-b0: entered promiscuous mode
Jan 27 22:43:38 compute-0 nova_compute[185650]: 2026-01-27 22:43:38.598 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:38.602 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap98f694e3-b0, col_values=(('external_ids', {'iface-id': 'acacffcb-4de9-40c5-aeef-3e5766b557e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:43:38 compute-0 nova_compute[185650]: 2026-01-27 22:43:38.604 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:43:38 compute-0 nova_compute[185650]: 2026-01-27 22:43:38.605 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:43:38 compute-0 ovn_controller[98048]: 2026-01-27T22:43:38Z|00031|binding|INFO|Releasing lport acacffcb-4de9-40c5-aeef-3e5766b557e0 from this chassis (sb_readonly=0)
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:38.609 107302 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/98f694e3-becc-413f-b42b-35a7171f7f96.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/98f694e3-becc-413f-b42b-35a7171f7f96.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 22:43:38 compute-0 nova_compute[185650]: 2026-01-27 22:43:38.621 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:38.619 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[417492d1-e236-49ee-a37f-1cba3d2d593d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:38.631 107302 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]: global
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]:     log         /dev/log local0 debug
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]:     log-tag     haproxy-metadata-proxy-98f694e3-becc-413f-b42b-35a7171f7f96
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]:     user        root
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]:     group       root
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]:     maxconn     1024
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]:     pidfile     /var/lib/neutron/external/pids/98f694e3-becc-413f-b42b-35a7171f7f96.pid.haproxy
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]:     daemon
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]: 
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]: defaults
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]:     log global
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]:     mode http
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]:     option httplog
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]:     option dontlognull
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]:     option http-server-close
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]:     option forwardfor
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]:     retries                 3
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]:     timeout http-request    30s
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]:     timeout connect         30s
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]:     timeout client          32s
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]:     timeout server          32s
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]:     timeout http-keep-alive 30s
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]: 
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]: 
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]: listen listener
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]:     bind 169.254.169.254:80
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]:     http-request add-header X-OVN-Network-ID 98f694e3-becc-413f-b42b-35a7171f7f96
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 22:43:38 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:43:38.632 107302 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96', 'env', 'PROCESS_TAG=haproxy-98f694e3-becc-413f-b42b-35a7171f7f96', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/98f694e3-becc-413f-b42b-35a7171f7f96.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 22:43:39 compute-0 podman[238840]: 2026-01-27 22:43:39.058951098 +0000 UTC m=+0.063490031 container create 2fe55454a57ca8e01dce97f654e0a47b037abf96a1e82df059c72ef4ce87c3fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 22:43:39 compute-0 systemd[1]: Started libpod-conmon-2fe55454a57ca8e01dce97f654e0a47b037abf96a1e82df059c72ef4ce87c3fe.scope.
Jan 27 22:43:39 compute-0 podman[238840]: 2026-01-27 22:43:39.02475726 +0000 UTC m=+0.029296213 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 22:43:39 compute-0 systemd[1]: Started libcrun container.
Jan 27 22:43:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93608add820367f9d9c7ed63e554c8796492e0eb1461a1e19ba1cd3745b99a2c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 22:43:39 compute-0 podman[238840]: 2026-01-27 22:43:39.180470767 +0000 UTC m=+0.185009730 container init 2fe55454a57ca8e01dce97f654e0a47b037abf96a1e82df059c72ef4ce87c3fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Jan 27 22:43:39 compute-0 podman[238840]: 2026-01-27 22:43:39.194834365 +0000 UTC m=+0.199373298 container start 2fe55454a57ca8e01dce97f654e0a47b037abf96a1e82df059c72ef4ce87c3fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 27 22:43:39 compute-0 neutron-haproxy-ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96[238855]: [NOTICE]   (238859) : New worker (238861) forked
Jan 27 22:43:39 compute-0 neutron-haproxy-ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96[238855]: [NOTICE]   (238859) : Loading success.
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.245 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1728 Content-Type: application/json Date: Tue, 27 Jan 2026 22:43:38 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-b58418bb-dba4-4d61-9683-c32830a335b0 x-openstack-request-id: req-b58418bb-dba4-4d61-9683-c32830a335b0 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.245 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "344c74c3-95d6-4f19-993f-b4a89c9d074b", "name": "test_0", "status": "ACTIVE", "tenant_id": "8318d5a200d74e4386cf4972db015b75", "user_id": "7387204f74504e288ed7a5dee73f5083", "metadata": {}, "hostId": "6b704d868c202dfce1245c3ae64d5f83176b88963479398e3b586eea", "image": {"id": "7e803ca7-2382-4e5a-95f7-55acaa154415", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/7e803ca7-2382-4e5a-95f7-55acaa154415"}]}, "flavor": {"id": "c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093"}]}, "created": "2026-01-27T22:43:23Z", "updated": "2026-01-27T22:43:35Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.119", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:27:72:fe"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/344c74c3-95d6-4f19-993f-b4a89c9d074b"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/344c74c3-95d6-4f19-993f-b4a89c9d074b"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-27T22:43:35.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.245 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/344c74c3-95d6-4f19-993f-b4a89c9d074b used request id req-b58418bb-dba4-4d61-9683-c32830a335b0 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.247 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '344c74c3-95d6-4f19-993f-b4a89c9d074b', 'name': 'test_0', 'flavor': {'id': 'c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7e803ca7-2382-4e5a-95f7-55acaa154415'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8318d5a200d74e4386cf4972db015b75', 'user_id': '7387204f74504e288ed7a5dee73f5083', 'hostId': '6b704d868c202dfce1245c3ae64d5f83176b88963479398e3b586eea', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.247 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.247 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c646060>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.247 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c646060>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.248 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.249 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T22:43:39.247790) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.252 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 344c74c3-95d6-4f19-993f-b4a89c9d074b / tap389fa2e1-24 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.252 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.253 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.253 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f826c645dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.253 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.254 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.254 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.254 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.254 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.254 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.255 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f826c647800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.255 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-27T22:43:39.254173) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.259 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.259 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.259 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.259 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.259 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.260 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.260 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f826c647650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.260 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.260 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.260 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.261 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.261 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.261 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.260 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T22:43:39.259651) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.261 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f826c645640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.261 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.261 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.262 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.262 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.262 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T22:43:39.261058) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.262 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T22:43:39.262137) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.326 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.326 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.326 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.327 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f826c8ae7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.327 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.327 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.328 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.328 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.329 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.329 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f826c645a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.329 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.329 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.329 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.329 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.329 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.330 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.330 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.330 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f826c6462a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.330 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.331 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.331 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.331 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.331 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.331 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f826c647f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.331 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.331 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.331 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.331 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.335 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T22:43:39.328850) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.335 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T22:43:39.329886) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.336 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T22:43:39.331130) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.336 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T22:43:39.331944) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.375 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/cpu volume: 3580000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.376 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.376 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f826c645af0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.376 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.376 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.376 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.377 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.377 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T22:43:39.376997) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.377 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.377 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f826c645d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.378 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.378 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.378 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.378 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.379 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.379 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T22:43:39.378625) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.379 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 344c74c3-95d6-4f19-993f-b4a89c9d074b: ceilometer.compute.pollsters.NoVolumeException
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.379 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.379 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f826c645b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.379 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.380 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.380 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.380 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.381 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T22:43:39.380452) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.381 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.381 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f826c644a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.381 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.381 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.382 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.382 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.382 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T22:43:39.382152) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.406 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.406 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.406 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.407 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.407 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f826c6453a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.407 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.407 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645490>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.407 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645490>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.407 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.407 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.407 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.408 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.408 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.408 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f826c6454c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.408 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.408 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.408 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.409 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.409 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.latency volume: 415653646 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.409 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T22:43:39.407571) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.409 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.409 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.latency volume: 2767866 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.409 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.410 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f826c645520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.410 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T22:43:39.409005) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.410 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.410 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645550>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.410 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645550>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.410 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.410 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.410 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.411 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.411 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.411 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f826c645d90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.411 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T22:43:39.410395) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.411 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.411 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.411 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.411 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.412 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.412 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.412 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f826c646570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.412 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.412 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T22:43:39.411891) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.412 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.412 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.412 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.413 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.413 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.413 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f826c645580>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.413 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.413 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.413 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.413 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T22:43:39.412909) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.413 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.414 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.414 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.414 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.414 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.414 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f826c6455e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.415 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.415 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645610>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.415 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645610>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.415 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.415 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T22:43:39.413898) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.415 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.415 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.416 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.416 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.416 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f826c644050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.416 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.416 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.416 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.416 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.416 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.417 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.417 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f826c647860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.417 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.417 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.417 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.417 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.417 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.418 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.418 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f826c6476e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.418 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.418 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.418 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.418 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.418 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.419 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.419 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f826c6456a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.419 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.419 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645730>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.419 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645730>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.419 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.419 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.419 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.420 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f826f277b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.420 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.420 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.420 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.420 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.420 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.420 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.421 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.421 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.421 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f826c647770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.421 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.421 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6477a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.421 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6477a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.421 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.421 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.422 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.422 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.422 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.422 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.422 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.422 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.424 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.424 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.424 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.424 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.424 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.424 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.427 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T22:43:39.415388) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.428 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T22:43:39.416846) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.428 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T22:43:39.417824) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.428 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T22:43:39.418663) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.428 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T22:43:39.419548) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.428 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T22:43:39.420473) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:43:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:43:39.428 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-27T22:43:39.421818) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:43:39 compute-0 nova_compute[185650]: 2026-01-27 22:43:39.860 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:43:43 compute-0 nova_compute[185650]: 2026-01-27 22:43:43.332 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:43:44 compute-0 nova_compute[185650]: 2026-01-27 22:43:44.861 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:43:44 compute-0 nova_compute[185650]: 2026-01-27 22:43:44.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:43:44 compute-0 nova_compute[185650]: 2026-01-27 22:43:44.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:43:45 compute-0 nova_compute[185650]: 2026-01-27 22:43:45.024 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:43:45 compute-0 nova_compute[185650]: 2026-01-27 22:43:45.025 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:43:45 compute-0 nova_compute[185650]: 2026-01-27 22:43:45.025 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:43:45 compute-0 nova_compute[185650]: 2026-01-27 22:43:45.025 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 22:43:45 compute-0 nova_compute[185650]: 2026-01-27 22:43:45.134 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:43:45 compute-0 nova_compute[185650]: 2026-01-27 22:43:45.232 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:43:45 compute-0 nova_compute[185650]: 2026-01-27 22:43:45.235 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:43:45 compute-0 nova_compute[185650]: 2026-01-27 22:43:45.319 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:43:45 compute-0 nova_compute[185650]: 2026-01-27 22:43:45.321 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:43:45 compute-0 nova_compute[185650]: 2026-01-27 22:43:45.390 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:43:45 compute-0 nova_compute[185650]: 2026-01-27 22:43:45.392 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:43:45 compute-0 nova_compute[185650]: 2026-01-27 22:43:45.470 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:43:45 compute-0 nova_compute[185650]: 2026-01-27 22:43:45.796 185654 WARNING nova.virt.libvirt.driver [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 22:43:45 compute-0 nova_compute[185650]: 2026-01-27 22:43:45.798 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5336MB free_disk=72.44320678710938GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 22:43:45 compute-0 nova_compute[185650]: 2026-01-27 22:43:45.798 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:43:45 compute-0 nova_compute[185650]: 2026-01-27 22:43:45.799 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:43:45 compute-0 nova_compute[185650]: 2026-01-27 22:43:45.875 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 344c74c3-95d6-4f19-993f-b4a89c9d074b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:43:45 compute-0 nova_compute[185650]: 2026-01-27 22:43:45.876 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 22:43:45 compute-0 nova_compute[185650]: 2026-01-27 22:43:45.877 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 22:43:45 compute-0 nova_compute[185650]: 2026-01-27 22:43:45.918 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Updating inventory in ProviderTree for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 22:43:45 compute-0 nova_compute[185650]: 2026-01-27 22:43:45.948 185654 ERROR nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [req-e3055a35-68c3-48f2-8ea7-e8eb3d23d860] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 200c8b8b-d176-4e2d-a773-1ed54a9635a3.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-e3055a35-68c3-48f2-8ea7-e8eb3d23d860"}]}
Jan 27 22:43:45 compute-0 nova_compute[185650]: 2026-01-27 22:43:45.964 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Refreshing inventories for resource provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 22:43:46 compute-0 nova_compute[185650]: 2026-01-27 22:43:45.999 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Updating ProviderTree inventory for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 22:43:46 compute-0 nova_compute[185650]: 2026-01-27 22:43:46.000 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Updating inventory in ProviderTree for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 22:43:46 compute-0 nova_compute[185650]: 2026-01-27 22:43:46.013 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Refreshing aggregate associations for resource provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 22:43:46 compute-0 nova_compute[185650]: 2026-01-27 22:43:46.031 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Refreshing trait associations for resource provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NODE,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_AVX,HW_CPU_X86_MMX,HW_CPU_X86_BMI,HW_CPU_X86_SSE41,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 22:43:46 compute-0 nova_compute[185650]: 2026-01-27 22:43:46.069 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Updating inventory in ProviderTree for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 22:43:46 compute-0 nova_compute[185650]: 2026-01-27 22:43:46.109 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Updated inventory for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 27 22:43:46 compute-0 nova_compute[185650]: 2026-01-27 22:43:46.110 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Updating resource provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 27 22:43:46 compute-0 nova_compute[185650]: 2026-01-27 22:43:46.111 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Updating inventory in ProviderTree for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 22:43:46 compute-0 nova_compute[185650]: 2026-01-27 22:43:46.136 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 22:43:46 compute-0 nova_compute[185650]: 2026-01-27 22:43:46.137 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.338s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:43:46 compute-0 podman[238886]: 2026-01-27 22:43:46.378411892 +0000 UTC m=+0.081126100 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126)
Jan 27 22:43:47 compute-0 nova_compute[185650]: 2026-01-27 22:43:47.139 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:43:47 compute-0 nova_compute[185650]: 2026-01-27 22:43:47.140 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 22:43:47 compute-0 nova_compute[185650]: 2026-01-27 22:43:47.140 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 22:43:47 compute-0 nova_compute[185650]: 2026-01-27 22:43:47.305 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "refresh_cache-344c74c3-95d6-4f19-993f-b4a89c9d074b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 22:43:47 compute-0 nova_compute[185650]: 2026-01-27 22:43:47.305 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquired lock "refresh_cache-344c74c3-95d6-4f19-993f-b4a89c9d074b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 22:43:47 compute-0 nova_compute[185650]: 2026-01-27 22:43:47.306 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 22:43:47 compute-0 nova_compute[185650]: 2026-01-27 22:43:47.306 185654 DEBUG nova.objects.instance [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 344c74c3-95d6-4f19-993f-b4a89c9d074b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 22:43:48 compute-0 nova_compute[185650]: 2026-01-27 22:43:48.334 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:43:48 compute-0 podman[238907]: 2026-01-27 22:43:48.359362009 +0000 UTC m=+0.062419732 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 22:43:48 compute-0 nova_compute[185650]: 2026-01-27 22:43:48.419 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:43:48 compute-0 NetworkManager[56600]: <info>  [1769553828.4210] manager: (patch-br-int-to-provnet-d119fa92-bef8-49e6-a71b-dd674f01104f): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Jan 27 22:43:48 compute-0 ovn_controller[98048]: 2026-01-27T22:43:48Z|00032|binding|INFO|Releasing lport acacffcb-4de9-40c5-aeef-3e5766b557e0 from this chassis (sb_readonly=0)
Jan 27 22:43:48 compute-0 NetworkManager[56600]: <info>  [1769553828.4235] device (patch-br-int-to-provnet-d119fa92-bef8-49e6-a71b-dd674f01104f)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 22:43:48 compute-0 NetworkManager[56600]: <warn>  [1769553828.4240] device (patch-br-int-to-provnet-d119fa92-bef8-49e6-a71b-dd674f01104f)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 22:43:48 compute-0 NetworkManager[56600]: <info>  [1769553828.4297] manager: (patch-provnet-d119fa92-bef8-49e6-a71b-dd674f01104f-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Jan 27 22:43:48 compute-0 NetworkManager[56600]: <info>  [1769553828.4323] device (patch-provnet-d119fa92-bef8-49e6-a71b-dd674f01104f-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 27 22:43:48 compute-0 NetworkManager[56600]: <warn>  [1769553828.4324] device (patch-provnet-d119fa92-bef8-49e6-a71b-dd674f01104f-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 27 22:43:48 compute-0 NetworkManager[56600]: <info>  [1769553828.4374] manager: (patch-br-int-to-provnet-d119fa92-bef8-49e6-a71b-dd674f01104f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Jan 27 22:43:48 compute-0 NetworkManager[56600]: <info>  [1769553828.4408] manager: (patch-provnet-d119fa92-bef8-49e6-a71b-dd674f01104f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Jan 27 22:43:48 compute-0 NetworkManager[56600]: <info>  [1769553828.4434] device (patch-br-int-to-provnet-d119fa92-bef8-49e6-a71b-dd674f01104f)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 27 22:43:48 compute-0 NetworkManager[56600]: <info>  [1769553828.4459] device (patch-provnet-d119fa92-bef8-49e6-a71b-dd674f01104f-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 27 22:43:48 compute-0 nova_compute[185650]: 2026-01-27 22:43:48.451 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:43:48 compute-0 ovn_controller[98048]: 2026-01-27T22:43:48Z|00033|binding|INFO|Releasing lport acacffcb-4de9-40c5-aeef-3e5766b557e0 from this chassis (sb_readonly=0)
Jan 27 22:43:48 compute-0 nova_compute[185650]: 2026-01-27 22:43:48.458 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:43:48 compute-0 nova_compute[185650]: 2026-01-27 22:43:48.493 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Updating instance_info_cache with network_info: [{"id": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "address": "fa:16:3e:27:72:fe", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.119", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap389fa2e1-24", "ovs_interfaceid": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 22:43:48 compute-0 nova_compute[185650]: 2026-01-27 22:43:48.515 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Releasing lock "refresh_cache-344c74c3-95d6-4f19-993f-b4a89c9d074b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 22:43:48 compute-0 nova_compute[185650]: 2026-01-27 22:43:48.516 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 22:43:48 compute-0 nova_compute[185650]: 2026-01-27 22:43:48.516 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:43:48 compute-0 nova_compute[185650]: 2026-01-27 22:43:48.517 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 22:43:48 compute-0 nova_compute[185650]: 2026-01-27 22:43:48.831 185654 DEBUG nova.compute.manager [req-93bb8326-7d15-40e4-a145-0a4c9cb6324c req-71e7e499-7514-4eb4-a3d8-d5fe15a44650 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Received event network-changed-389fa2e1-24bb-48bb-a577-b2f7ade8ddc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 22:43:48 compute-0 nova_compute[185650]: 2026-01-27 22:43:48.833 185654 DEBUG nova.compute.manager [req-93bb8326-7d15-40e4-a145-0a4c9cb6324c req-71e7e499-7514-4eb4-a3d8-d5fe15a44650 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Refreshing instance network info cache due to event network-changed-389fa2e1-24bb-48bb-a577-b2f7ade8ddc5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 22:43:48 compute-0 nova_compute[185650]: 2026-01-27 22:43:48.834 185654 DEBUG oslo_concurrency.lockutils [req-93bb8326-7d15-40e4-a145-0a4c9cb6324c req-71e7e499-7514-4eb4-a3d8-d5fe15a44650 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "refresh_cache-344c74c3-95d6-4f19-993f-b4a89c9d074b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 22:43:48 compute-0 nova_compute[185650]: 2026-01-27 22:43:48.835 185654 DEBUG oslo_concurrency.lockutils [req-93bb8326-7d15-40e4-a145-0a4c9cb6324c req-71e7e499-7514-4eb4-a3d8-d5fe15a44650 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquired lock "refresh_cache-344c74c3-95d6-4f19-993f-b4a89c9d074b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 22:43:48 compute-0 nova_compute[185650]: 2026-01-27 22:43:48.836 185654 DEBUG nova.network.neutron [req-93bb8326-7d15-40e4-a145-0a4c9cb6324c req-71e7e499-7514-4eb4-a3d8-d5fe15a44650 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Refreshing network info cache for port 389fa2e1-24bb-48bb-a577-b2f7ade8ddc5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 22:43:49 compute-0 nova_compute[185650]: 2026-01-27 22:43:49.864 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:43:49 compute-0 nova_compute[185650]: 2026-01-27 22:43:49.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:43:49 compute-0 nova_compute[185650]: 2026-01-27 22:43:49.995 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:43:49 compute-0 nova_compute[185650]: 2026-01-27 22:43:49.996 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:43:50 compute-0 nova_compute[185650]: 2026-01-27 22:43:50.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:43:50 compute-0 nova_compute[185650]: 2026-01-27 22:43:50.995 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:43:51 compute-0 nova_compute[185650]: 2026-01-27 22:43:51.087 185654 DEBUG nova.network.neutron [req-93bb8326-7d15-40e4-a145-0a4c9cb6324c req-71e7e499-7514-4eb4-a3d8-d5fe15a44650 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Updated VIF entry in instance network info cache for port 389fa2e1-24bb-48bb-a577-b2f7ade8ddc5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 22:43:51 compute-0 nova_compute[185650]: 2026-01-27 22:43:51.088 185654 DEBUG nova.network.neutron [req-93bb8326-7d15-40e4-a145-0a4c9cb6324c req-71e7e499-7514-4eb4-a3d8-d5fe15a44650 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Updating instance_info_cache with network_info: [{"id": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "address": "fa:16:3e:27:72:fe", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.119", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap389fa2e1-24", "ovs_interfaceid": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 22:43:51 compute-0 nova_compute[185650]: 2026-01-27 22:43:51.108 185654 DEBUG oslo_concurrency.lockutils [req-93bb8326-7d15-40e4-a145-0a4c9cb6324c req-71e7e499-7514-4eb4-a3d8-d5fe15a44650 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Releasing lock "refresh_cache-344c74c3-95d6-4f19-993f-b4a89c9d074b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 22:43:51 compute-0 podman[238927]: 2026-01-27 22:43:51.382276341 +0000 UTC m=+0.086039746 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 22:43:53 compute-0 nova_compute[185650]: 2026-01-27 22:43:53.336 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:43:54 compute-0 podman[238950]: 2026-01-27 22:43:54.423142101 +0000 UTC m=+0.122112806 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 27 22:43:54 compute-0 nova_compute[185650]: 2026-01-27 22:43:54.866 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:43:56 compute-0 podman[238971]: 2026-01-27 22:43:56.409808786 +0000 UTC m=+0.111425639 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, architecture=x86_64, managed_by=edpm_ansible, version=9.4, io.openshift.tags=base rhel9, distribution-scope=public, name=ubi9, config_id=kepler, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 27 22:43:56 compute-0 podman[238972]: 2026-01-27 22:43:56.426145099 +0000 UTC m=+0.119840473 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 22:43:58 compute-0 nova_compute[185650]: 2026-01-27 22:43:58.340 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:43:59 compute-0 podman[201529]: time="2026-01-27T22:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:43:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 22:43:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4320 "" "Go-http-client/1.1"
Jan 27 22:43:59 compute-0 nova_compute[185650]: 2026-01-27 22:43:59.868 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:44:01 compute-0 openstack_network_exporter[204648]: ERROR   22:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:44:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:44:01 compute-0 openstack_network_exporter[204648]: ERROR   22:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:44:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:44:03 compute-0 nova_compute[185650]: 2026-01-27 22:44:03.343 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:44:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:44:04.131 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:44:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:44:04.131 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:44:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:44:04.132 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:44:04 compute-0 nova_compute[185650]: 2026-01-27 22:44:04.871 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:44:05 compute-0 podman[239014]: 2026-01-27 22:44:05.400219483 +0000 UTC m=+0.100909418 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 22:44:08 compute-0 nova_compute[185650]: 2026-01-27 22:44:08.346 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:44:08 compute-0 podman[239037]: 2026-01-27 22:44:08.376850412 +0000 UTC m=+0.083572908 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, distribution-scope=public, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_id=openstack_network_exporter, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 27 22:44:09 compute-0 ovn_controller[98048]: 2026-01-27T22:44:09Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:27:72:fe 192.168.0.119
Jan 27 22:44:09 compute-0 ovn_controller[98048]: 2026-01-27T22:44:09Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:27:72:fe 192.168.0.119
Jan 27 22:44:09 compute-0 nova_compute[185650]: 2026-01-27 22:44:09.876 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:44:13 compute-0 nova_compute[185650]: 2026-01-27 22:44:13.348 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:44:14 compute-0 nova_compute[185650]: 2026-01-27 22:44:14.879 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:44:17 compute-0 podman[239070]: 2026-01-27 22:44:17.409542706 +0000 UTC m=+0.095113721 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260126, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Jan 27 22:44:18 compute-0 nova_compute[185650]: 2026-01-27 22:44:18.352 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:44:18 compute-0 ovn_controller[98048]: 2026-01-27T22:44:18Z|00034|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Jan 27 22:44:19 compute-0 podman[239088]: 2026-01-27 22:44:19.391980218 +0000 UTC m=+0.080631742 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 22:44:19 compute-0 nova_compute[185650]: 2026-01-27 22:44:19.881 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:44:22 compute-0 podman[239108]: 2026-01-27 22:44:22.370246535 +0000 UTC m=+0.068209963 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 22:44:23 compute-0 nova_compute[185650]: 2026-01-27 22:44:23.355 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:44:24 compute-0 nova_compute[185650]: 2026-01-27 22:44:24.884 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:44:25 compute-0 podman[239131]: 2026-01-27 22:44:25.02226202 +0000 UTC m=+0.110429675 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi)
Jan 27 22:44:27 compute-0 podman[239151]: 2026-01-27 22:44:27.422730607 +0000 UTC m=+0.126719298 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, container_name=kepler, distribution-scope=public, io.buildah.version=1.29.0, managed_by=edpm_ansible, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, release-0.7.12=, release=1214.1726694543, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, build-date=2024-09-18T21:23:30)
Jan 27 22:44:27 compute-0 podman[239152]: 2026-01-27 22:44:27.442996167 +0000 UTC m=+0.136330982 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 22:44:28 compute-0 nova_compute[185650]: 2026-01-27 22:44:28.359 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:44:29 compute-0 podman[201529]: time="2026-01-27T22:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:44:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 22:44:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4329 "" "Go-http-client/1.1"
Jan 27 22:44:29 compute-0 nova_compute[185650]: 2026-01-27 22:44:29.886 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:44:31 compute-0 openstack_network_exporter[204648]: ERROR   22:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:44:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:44:31 compute-0 openstack_network_exporter[204648]: ERROR   22:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:44:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:44:32 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:44:32.122 107302 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:41:f8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '26:ae:8e:b8:80:28'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 22:44:32 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:44:32.123 107302 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 22:44:32 compute-0 nova_compute[185650]: 2026-01-27 22:44:32.125 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:44:33 compute-0 nova_compute[185650]: 2026-01-27 22:44:33.362 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:44:34 compute-0 nova_compute[185650]: 2026-01-27 22:44:34.888 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:44:36 compute-0 podman[239194]: 2026-01-27 22:44:36.364005711 +0000 UTC m=+0.062448293 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 22:44:36 compute-0 nova_compute[185650]: 2026-01-27 22:44:36.591 185654 DEBUG oslo_concurrency.lockutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "d2c3fc6f-7629-469b-be68-8fe07acabe0f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:44:36 compute-0 nova_compute[185650]: 2026-01-27 22:44:36.592 185654 DEBUG oslo_concurrency.lockutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "d2c3fc6f-7629-469b-be68-8fe07acabe0f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:44:36 compute-0 nova_compute[185650]: 2026-01-27 22:44:36.605 185654 DEBUG nova.compute.manager [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 22:44:36 compute-0 nova_compute[185650]: 2026-01-27 22:44:36.739 185654 DEBUG oslo_concurrency.lockutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:44:36 compute-0 nova_compute[185650]: 2026-01-27 22:44:36.739 185654 DEBUG oslo_concurrency.lockutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:44:36 compute-0 nova_compute[185650]: 2026-01-27 22:44:36.750 185654 DEBUG nova.virt.hardware [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 22:44:36 compute-0 nova_compute[185650]: 2026-01-27 22:44:36.751 185654 INFO nova.compute.claims [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Claim successful on node compute-0.ctlplane.example.com
Jan 27 22:44:36 compute-0 nova_compute[185650]: 2026-01-27 22:44:36.922 185654 DEBUG nova.compute.provider_tree [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 22:44:36 compute-0 nova_compute[185650]: 2026-01-27 22:44:36.935 185654 DEBUG nova.scheduler.client.report [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 22:44:36 compute-0 nova_compute[185650]: 2026-01-27 22:44:36.965 185654 DEBUG oslo_concurrency.lockutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.225s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:44:36 compute-0 nova_compute[185650]: 2026-01-27 22:44:36.966 185654 DEBUG nova.compute.manager [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.000 185654 DEBUG nova.compute.manager [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.001 185654 DEBUG nova.network.neutron [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.019 185654 INFO nova.virt.libvirt.driver [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.051 185654 DEBUG nova.compute.manager [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.127 185654 DEBUG nova.compute.manager [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.128 185654 DEBUG nova.virt.libvirt.driver [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.129 185654 INFO nova.virt.libvirt.driver [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Creating image(s)
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.129 185654 DEBUG oslo_concurrency.lockutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "/var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.130 185654 DEBUG oslo_concurrency.lockutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "/var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.130 185654 DEBUG oslo_concurrency.lockutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "/var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.142 185654 DEBUG oslo_concurrency.processutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.233 185654 DEBUG oslo_concurrency.processutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.235 185654 DEBUG oslo_concurrency.lockutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "5c90c71330689347f3144a95195c41f3e929b39e" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.235 185654 DEBUG oslo_concurrency.lockutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "5c90c71330689347f3144a95195c41f3e929b39e" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.246 185654 DEBUG oslo_concurrency.processutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.316 185654 DEBUG oslo_concurrency.processutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.317 185654 DEBUG oslo_concurrency.processutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e,backing_fmt=raw /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.366 185654 DEBUG oslo_concurrency.processutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e,backing_fmt=raw /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk 1073741824" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.367 185654 DEBUG oslo_concurrency.lockutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "5c90c71330689347f3144a95195c41f3e929b39e" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.368 185654 DEBUG oslo_concurrency.processutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.440 185654 DEBUG oslo_concurrency.processutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.442 185654 DEBUG nova.virt.disk.api [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Checking if we can resize image /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.442 185654 DEBUG oslo_concurrency.processutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.507 185654 DEBUG oslo_concurrency.processutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.508 185654 DEBUG nova.virt.disk.api [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Cannot resize image /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.509 185654 DEBUG nova.objects.instance [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lazy-loading 'migration_context' on Instance uuid d2c3fc6f-7629-469b-be68-8fe07acabe0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.525 185654 DEBUG oslo_concurrency.lockutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "/var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.526 185654 DEBUG oslo_concurrency.lockutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "/var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.527 185654 DEBUG oslo_concurrency.lockutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "/var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.548 185654 DEBUG oslo_concurrency.processutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.604 185654 DEBUG oslo_concurrency.processutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.605 185654 DEBUG oslo_concurrency.lockutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.606 185654 DEBUG oslo_concurrency.lockutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.618 185654 DEBUG oslo_concurrency.processutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.673 185654 DEBUG oslo_concurrency.processutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.674 185654 DEBUG oslo_concurrency.processutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.710 185654 DEBUG oslo_concurrency.processutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 1073741824" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.711 185654 DEBUG oslo_concurrency.lockutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.712 185654 DEBUG oslo_concurrency.processutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.767 185654 DEBUG oslo_concurrency.processutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.768 185654 DEBUG nova.virt.libvirt.driver [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.768 185654 DEBUG nova.virt.libvirt.driver [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Ensure instance console log exists: /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.768 185654 DEBUG oslo_concurrency.lockutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.769 185654 DEBUG oslo_concurrency.lockutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:44:37 compute-0 nova_compute[185650]: 2026-01-27 22:44:37.769 185654 DEBUG oslo_concurrency.lockutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:44:38 compute-0 nova_compute[185650]: 2026-01-27 22:44:38.365 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:44:39 compute-0 podman[239245]: 2026-01-27 22:44:39.389770939 +0000 UTC m=+0.089492425 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.6, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=)
Jan 27 22:44:39 compute-0 nova_compute[185650]: 2026-01-27 22:44:39.890 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:44:39 compute-0 nova_compute[185650]: 2026-01-27 22:44:39.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:44:39 compute-0 nova_compute[185650]: 2026-01-27 22:44:39.994 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 22:44:40 compute-0 nova_compute[185650]: 2026-01-27 22:44:40.016 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:44:40 compute-0 nova_compute[185650]: 2026-01-27 22:44:40.738 185654 DEBUG nova.network.neutron [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Successfully updated port: 2083900f-b759-4c97-8c34-5ad3832f0446 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 22:44:40 compute-0 nova_compute[185650]: 2026-01-27 22:44:40.753 185654 DEBUG oslo_concurrency.lockutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "refresh_cache-d2c3fc6f-7629-469b-be68-8fe07acabe0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 22:44:40 compute-0 nova_compute[185650]: 2026-01-27 22:44:40.754 185654 DEBUG oslo_concurrency.lockutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquired lock "refresh_cache-d2c3fc6f-7629-469b-be68-8fe07acabe0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 22:44:40 compute-0 nova_compute[185650]: 2026-01-27 22:44:40.754 185654 DEBUG nova.network.neutron [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 22:44:40 compute-0 nova_compute[185650]: 2026-01-27 22:44:40.842 185654 DEBUG nova.compute.manager [req-ea99004c-8803-49e1-bdba-aa0aa64f10fa req-9a79de5a-b1c2-4228-846e-b85b5c1ed35c b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Received event network-changed-2083900f-b759-4c97-8c34-5ad3832f0446 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 22:44:40 compute-0 nova_compute[185650]: 2026-01-27 22:44:40.842 185654 DEBUG nova.compute.manager [req-ea99004c-8803-49e1-bdba-aa0aa64f10fa req-9a79de5a-b1c2-4228-846e-b85b5c1ed35c b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Refreshing instance network info cache due to event network-changed-2083900f-b759-4c97-8c34-5ad3832f0446. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 22:44:40 compute-0 nova_compute[185650]: 2026-01-27 22:44:40.843 185654 DEBUG oslo_concurrency.lockutils [req-ea99004c-8803-49e1-bdba-aa0aa64f10fa req-9a79de5a-b1c2-4228-846e-b85b5c1ed35c b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "refresh_cache-d2c3fc6f-7629-469b-be68-8fe07acabe0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 22:44:40 compute-0 nova_compute[185650]: 2026-01-27 22:44:40.904 185654 DEBUG nova.network.neutron [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.030 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.030 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.048 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 22:44:42 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:44:42.125 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e88f80e1-ee63-4bdc-95c3-ad473efb7428, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.405 185654 DEBUG nova.network.neutron [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Updating instance_info_cache with network_info: [{"id": "2083900f-b759-4c97-8c34-5ad3832f0446", "address": "fa:16:3e:27:7c:56", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2083900f-b7", "ovs_interfaceid": "2083900f-b759-4c97-8c34-5ad3832f0446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.449 185654 DEBUG oslo_concurrency.lockutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Releasing lock "refresh_cache-d2c3fc6f-7629-469b-be68-8fe07acabe0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.449 185654 DEBUG nova.compute.manager [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Instance network_info: |[{"id": "2083900f-b759-4c97-8c34-5ad3832f0446", "address": "fa:16:3e:27:7c:56", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2083900f-b7", "ovs_interfaceid": "2083900f-b759-4c97-8c34-5ad3832f0446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.450 185654 DEBUG oslo_concurrency.lockutils [req-ea99004c-8803-49e1-bdba-aa0aa64f10fa req-9a79de5a-b1c2-4228-846e-b85b5c1ed35c b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquired lock "refresh_cache-d2c3fc6f-7629-469b-be68-8fe07acabe0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.450 185654 DEBUG nova.network.neutron [req-ea99004c-8803-49e1-bdba-aa0aa64f10fa req-9a79de5a-b1c2-4228-846e-b85b5c1ed35c b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Refreshing network info cache for port 2083900f-b759-4c97-8c34-5ad3832f0446 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.453 185654 DEBUG nova.virt.libvirt.driver [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Start _get_guest_xml network_info=[{"id": "2083900f-b759-4c97-8c34-5ad3832f0446", "address": "fa:16:3e:27:7c:56", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2083900f-b7", "ovs_interfaceid": "2083900f-b759-4c97-8c34-5ad3832f0446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-27T22:42:20Z,direct_url=<?>,disk_format='qcow2',id=7e803ca7-2382-4e5a-95f7-55acaa154415,min_disk=0,min_ram=0,name='cirros',owner='8318d5a200d74e4386cf4972db015b75',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-27T22:42:22Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_format': None, 'boot_index': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7e803ca7-2382-4e5a-95f7-55acaa154415'}], 'ephemerals': [{'size': 1, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'encrypted': False}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.459 185654 WARNING nova.virt.libvirt.driver [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.464 185654 DEBUG nova.virt.libvirt.host [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.465 185654 DEBUG nova.virt.libvirt.host [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.472 185654 DEBUG nova.virt.libvirt.host [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.473 185654 DEBUG nova.virt.libvirt.host [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.473 185654 DEBUG nova.virt.libvirt.driver [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.474 185654 DEBUG nova.virt.hardware [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T22:42:25Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-27T22:42:20Z,direct_url=<?>,disk_format='qcow2',id=7e803ca7-2382-4e5a-95f7-55acaa154415,min_disk=0,min_ram=0,name='cirros',owner='8318d5a200d74e4386cf4972db015b75',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-27T22:42:22Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.474 185654 DEBUG nova.virt.hardware [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.474 185654 DEBUG nova.virt.hardware [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.475 185654 DEBUG nova.virt.hardware [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.475 185654 DEBUG nova.virt.hardware [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.475 185654 DEBUG nova.virt.hardware [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.476 185654 DEBUG nova.virt.hardware [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.476 185654 DEBUG nova.virt.hardware [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.476 185654 DEBUG nova.virt.hardware [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.477 185654 DEBUG nova.virt.hardware [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.477 185654 DEBUG nova.virt.hardware [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.480 185654 DEBUG nova.virt.libvirt.vif [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T22:44:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-bxiivp3-qxfwvjemo3rq-sawqp3hw5btx-vnf-e5pqbtf6sduj',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-bxiivp3-qxfwvjemo3rq-sawqp3hw5btx-vnf-e5pqbtf6sduj',id=2,image_ref='7e803ca7-2382-4e5a-95f7-55acaa154415',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='3b67098f-eb50-41e2-8c8a-348367561673'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8318d5a200d74e4386cf4972db015b75',ramdisk_id='',reservation_id='r-57dydbj3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='7e803ca7-2382-4e5a-95f7-55acaa154415',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T22:44:37Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wNzA5NzA0NDQxMTA1NDQ2MzY1PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTA3MDk3MDQ0NDExMDU0NDYzNjU9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDcwOTcwNDQ0MTEwNTQ0NjM2NT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTA3MDk3MDQ0NDExMDU0NDYzNjU9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wNzA5NzA0NDQxMTA1NDQ2MzY1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wNzA5NzA0NDQxMTA1NDQ2MzY1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Jan 27 22:44:42 compute-0 nova_compute[185650]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDcwOTcwNDQ0MTEwNTQ0NjM2NT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTA3MDk3MDQ0NDExMDU0NDYzNjU9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wNzA5NzA0NDQxMTA1NDQ2MzY1PT0tLQo=',user_id='7387204f74504e288ed7a5dee73f5083',uuid=d2c3fc6f-7629-469b-be68-8fe07acabe0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2083900f-b759-4c97-8c34-5ad3832f0446", "address": "fa:16:3e:27:7c:56", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2083900f-b7", "ovs_interfaceid": "2083900f-b759-4c97-8c34-5ad3832f0446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.481 185654 DEBUG nova.network.os_vif_util [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Converting VIF {"id": "2083900f-b759-4c97-8c34-5ad3832f0446", "address": "fa:16:3e:27:7c:56", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2083900f-b7", "ovs_interfaceid": "2083900f-b759-4c97-8c34-5ad3832f0446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.482 185654 DEBUG nova.network.os_vif_util [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:7c:56,bridge_name='br-int',has_traffic_filtering=True,id=2083900f-b759-4c97-8c34-5ad3832f0446,network=Network(98f694e3-becc-413f-b42b-35a7171f7f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2083900f-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.483 185654 DEBUG nova.objects.instance [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lazy-loading 'pci_devices' on Instance uuid d2c3fc6f-7629-469b-be68-8fe07acabe0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.496 185654 DEBUG nova.virt.libvirt.driver [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] End _get_guest_xml xml=<domain type="kvm">
Jan 27 22:44:42 compute-0 nova_compute[185650]:   <uuid>d2c3fc6f-7629-469b-be68-8fe07acabe0f</uuid>
Jan 27 22:44:42 compute-0 nova_compute[185650]:   <name>instance-00000002</name>
Jan 27 22:44:42 compute-0 nova_compute[185650]:   <memory>524288</memory>
Jan 27 22:44:42 compute-0 nova_compute[185650]:   <vcpu>1</vcpu>
Jan 27 22:44:42 compute-0 nova_compute[185650]:   <metadata>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 22:44:42 compute-0 nova_compute[185650]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:       <nova:name>vn-bxiivp3-qxfwvjemo3rq-sawqp3hw5btx-vnf-e5pqbtf6sduj</nova:name>
Jan 27 22:44:42 compute-0 nova_compute[185650]:       <nova:creationTime>2026-01-27 22:44:42</nova:creationTime>
Jan 27 22:44:42 compute-0 nova_compute[185650]:       <nova:flavor name="m1.small">
Jan 27 22:44:42 compute-0 nova_compute[185650]:         <nova:memory>512</nova:memory>
Jan 27 22:44:42 compute-0 nova_compute[185650]:         <nova:disk>1</nova:disk>
Jan 27 22:44:42 compute-0 nova_compute[185650]:         <nova:swap>0</nova:swap>
Jan 27 22:44:42 compute-0 nova_compute[185650]:         <nova:ephemeral>1</nova:ephemeral>
Jan 27 22:44:42 compute-0 nova_compute[185650]:         <nova:vcpus>1</nova:vcpus>
Jan 27 22:44:42 compute-0 nova_compute[185650]:       </nova:flavor>
Jan 27 22:44:42 compute-0 nova_compute[185650]:       <nova:owner>
Jan 27 22:44:42 compute-0 nova_compute[185650]:         <nova:user uuid="7387204f74504e288ed7a5dee73f5083">admin</nova:user>
Jan 27 22:44:42 compute-0 nova_compute[185650]:         <nova:project uuid="8318d5a200d74e4386cf4972db015b75">admin</nova:project>
Jan 27 22:44:42 compute-0 nova_compute[185650]:       </nova:owner>
Jan 27 22:44:42 compute-0 nova_compute[185650]:       <nova:root type="image" uuid="7e803ca7-2382-4e5a-95f7-55acaa154415"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:       <nova:ports>
Jan 27 22:44:42 compute-0 nova_compute[185650]:         <nova:port uuid="2083900f-b759-4c97-8c34-5ad3832f0446">
Jan 27 22:44:42 compute-0 nova_compute[185650]:           <nova:ip type="fixed" address="192.168.0.225" ipVersion="4"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:         </nova:port>
Jan 27 22:44:42 compute-0 nova_compute[185650]:       </nova:ports>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     </nova:instance>
Jan 27 22:44:42 compute-0 nova_compute[185650]:   </metadata>
Jan 27 22:44:42 compute-0 nova_compute[185650]:   <sysinfo type="smbios">
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <system>
Jan 27 22:44:42 compute-0 nova_compute[185650]:       <entry name="manufacturer">RDO</entry>
Jan 27 22:44:42 compute-0 nova_compute[185650]:       <entry name="product">OpenStack Compute</entry>
Jan 27 22:44:42 compute-0 nova_compute[185650]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 22:44:42 compute-0 nova_compute[185650]:       <entry name="serial">d2c3fc6f-7629-469b-be68-8fe07acabe0f</entry>
Jan 27 22:44:42 compute-0 nova_compute[185650]:       <entry name="uuid">d2c3fc6f-7629-469b-be68-8fe07acabe0f</entry>
Jan 27 22:44:42 compute-0 nova_compute[185650]:       <entry name="family">Virtual Machine</entry>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     </system>
Jan 27 22:44:42 compute-0 nova_compute[185650]:   </sysinfo>
Jan 27 22:44:42 compute-0 nova_compute[185650]:   <os>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <boot dev="hd"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <smbios mode="sysinfo"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:   </os>
Jan 27 22:44:42 compute-0 nova_compute[185650]:   <features>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <acpi/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <apic/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <vmcoreinfo/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:   </features>
Jan 27 22:44:42 compute-0 nova_compute[185650]:   <clock offset="utc">
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <timer name="hpet" present="no"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:   </clock>
Jan 27 22:44:42 compute-0 nova_compute[185650]:   <cpu mode="host-model" match="exact">
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:   </cpu>
Jan 27 22:44:42 compute-0 nova_compute[185650]:   <devices>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <disk type="file" device="disk">
Jan 27 22:44:42 compute-0 nova_compute[185650]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:       <source file="/var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:       <target dev="vda" bus="virtio"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     </disk>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <disk type="file" device="disk">
Jan 27 22:44:42 compute-0 nova_compute[185650]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:       <source file="/var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:       <target dev="vdb" bus="virtio"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     </disk>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <disk type="file" device="cdrom">
Jan 27 22:44:42 compute-0 nova_compute[185650]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:       <source file="/var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.config"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:       <target dev="sda" bus="sata"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     </disk>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <interface type="ethernet">
Jan 27 22:44:42 compute-0 nova_compute[185650]:       <mac address="fa:16:3e:27:7c:56"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:       <model type="virtio"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:       <mtu size="1442"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:       <target dev="tap2083900f-b7"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     </interface>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <serial type="pty">
Jan 27 22:44:42 compute-0 nova_compute[185650]:       <log file="/var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/console.log" append="off"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     </serial>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <video>
Jan 27 22:44:42 compute-0 nova_compute[185650]:       <model type="virtio"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     </video>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <input type="tablet" bus="usb"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <rng model="virtio">
Jan 27 22:44:42 compute-0 nova_compute[185650]:       <backend model="random">/dev/urandom</backend>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     </rng>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <controller type="usb" index="0"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     <memballoon model="virtio">
Jan 27 22:44:42 compute-0 nova_compute[185650]:       <stats period="10"/>
Jan 27 22:44:42 compute-0 nova_compute[185650]:     </memballoon>
Jan 27 22:44:42 compute-0 nova_compute[185650]:   </devices>
Jan 27 22:44:42 compute-0 nova_compute[185650]: </domain>
Jan 27 22:44:42 compute-0 nova_compute[185650]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.498 185654 DEBUG nova.compute.manager [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Preparing to wait for external event network-vif-plugged-2083900f-b759-4c97-8c34-5ad3832f0446 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.498 185654 DEBUG oslo_concurrency.lockutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "d2c3fc6f-7629-469b-be68-8fe07acabe0f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.499 185654 DEBUG oslo_concurrency.lockutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "d2c3fc6f-7629-469b-be68-8fe07acabe0f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.499 185654 DEBUG oslo_concurrency.lockutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "d2c3fc6f-7629-469b-be68-8fe07acabe0f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.500 185654 DEBUG nova.virt.libvirt.vif [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T22:44:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-bxiivp3-qxfwvjemo3rq-sawqp3hw5btx-vnf-e5pqbtf6sduj',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-bxiivp3-qxfwvjemo3rq-sawqp3hw5btx-vnf-e5pqbtf6sduj',id=2,image_ref='7e803ca7-2382-4e5a-95f7-55acaa154415',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='3b67098f-eb50-41e2-8c8a-348367561673'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8318d5a200d74e4386cf4972db015b75',ramdisk_id='',reservation_id='r-57dydbj3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='7e803ca7-2382-4e5a-95f7-55acaa154415',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T22:44:37Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wNzA5NzA0NDQxMTA1NDQ2MzY1PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTA3MDk3MDQ0NDExMDU0NDYzNjU9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDcwOTcwNDQ0MTEwNTQ0NjM2NT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTA3MDk3MDQ0NDExMDU0NDYzNjU9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wNzA5NzA0NDQxMTA1NDQ2MzY1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wNzA5NzA0NDQxMTA1NDQ2MzY1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Jan 27 22:44:42 compute-0 nova_compute[185650]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDcwOTcwNDQ0MTEwNTQ0NjM2NT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29
udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTA3MDk3MDQ0NDExMDU0NDYzNjU9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wNzA5NzA0NDQxMTA1NDQ2MzY1PT0tLQo=',user_id='7387204f74504e288ed7a5dee73f5083',uuid=d2c3fc6f-7629-469b-be68-8fe07acabe0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2083900f-b759-4c97-8c34-5ad3832f0446", "address": "fa:16:3e:27:7c:56", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2083900f-b7", "ovs_interfaceid": "2083900f-b759-4c97-8c34-5ad3832f0446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.500 185654 DEBUG nova.network.os_vif_util [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Converting VIF {"id": "2083900f-b759-4c97-8c34-5ad3832f0446", "address": "fa:16:3e:27:7c:56", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2083900f-b7", "ovs_interfaceid": "2083900f-b759-4c97-8c34-5ad3832f0446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.501 185654 DEBUG nova.network.os_vif_util [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:7c:56,bridge_name='br-int',has_traffic_filtering=True,id=2083900f-b759-4c97-8c34-5ad3832f0446,network=Network(98f694e3-becc-413f-b42b-35a7171f7f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2083900f-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.501 185654 DEBUG os_vif [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:7c:56,bridge_name='br-int',has_traffic_filtering=True,id=2083900f-b759-4c97-8c34-5ad3832f0446,network=Network(98f694e3-becc-413f-b42b-35a7171f7f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2083900f-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.502 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.502 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.503 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.505 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.506 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2083900f-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.506 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2083900f-b7, col_values=(('external_ids', {'iface-id': '2083900f-b759-4c97-8c34-5ad3832f0446', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:27:7c:56', 'vm-uuid': 'd2c3fc6f-7629-469b-be68-8fe07acabe0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.508 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:44:42 compute-0 NetworkManager[56600]: <info>  [1769553882.5099] manager: (tap2083900f-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.510 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.516 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.517 185654 INFO os_vif [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:7c:56,bridge_name='br-int',has_traffic_filtering=True,id=2083900f-b759-4c97-8c34-5ad3832f0446,network=Network(98f694e3-becc-413f-b42b-35a7171f7f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2083900f-b7')
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.576 185654 DEBUG nova.virt.libvirt.driver [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.577 185654 DEBUG nova.virt.libvirt.driver [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.577 185654 DEBUG nova.virt.libvirt.driver [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.577 185654 DEBUG nova.virt.libvirt.driver [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] No VIF found with MAC fa:16:3e:27:7c:56, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 22:44:42 compute-0 nova_compute[185650]: 2026-01-27 22:44:42.577 185654 INFO nova.virt.libvirt.driver [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Using config drive
Jan 27 22:44:42 compute-0 rsyslogd[235951]: message too long (8192) with configured size 8096, begin of message is: 2026-01-27 22:44:42.480 185654 DEBUG nova.virt.libvirt.vif [None req-2f7a7fa1-15 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 22:44:42 compute-0 rsyslogd[235951]: message too long (8192) with configured size 8096, begin of message is: 2026-01-27 22:44:42.500 185654 DEBUG nova.virt.libvirt.vif [None req-2f7a7fa1-15 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 22:44:43 compute-0 nova_compute[185650]: 2026-01-27 22:44:43.327 185654 INFO nova.virt.libvirt.driver [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Creating config drive at /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.config
Jan 27 22:44:43 compute-0 nova_compute[185650]: 2026-01-27 22:44:43.333 185654 DEBUG oslo_concurrency.processutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1dc719ph execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:44:43 compute-0 nova_compute[185650]: 2026-01-27 22:44:43.459 185654 DEBUG oslo_concurrency.processutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1dc719ph" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:44:43 compute-0 kernel: tap2083900f-b7: entered promiscuous mode
Jan 27 22:44:43 compute-0 NetworkManager[56600]: <info>  [1769553883.5255] manager: (tap2083900f-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Jan 27 22:44:43 compute-0 nova_compute[185650]: 2026-01-27 22:44:43.527 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:44:43 compute-0 ovn_controller[98048]: 2026-01-27T22:44:43Z|00035|binding|INFO|Claiming lport 2083900f-b759-4c97-8c34-5ad3832f0446 for this chassis.
Jan 27 22:44:43 compute-0 ovn_controller[98048]: 2026-01-27T22:44:43Z|00036|binding|INFO|2083900f-b759-4c97-8c34-5ad3832f0446: Claiming fa:16:3e:27:7c:56 192.168.0.225
Jan 27 22:44:43 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:44:43.534 107302 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:7c:56 192.168.0.225'], port_security=['fa:16:3e:27:7c:56 192.168.0.225'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-e3ismbxiivp3-qxfwvjemo3rq-sawqp3hw5btx-port-crs66lsbh5mi', 'neutron:cidrs': '192.168.0.225/24', 'neutron:device_id': 'd2c3fc6f-7629-469b-be68-8fe07acabe0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-98f694e3-becc-413f-b42b-35a7171f7f96', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-e3ismbxiivp3-qxfwvjemo3rq-sawqp3hw5btx-port-crs66lsbh5mi', 'neutron:project_id': '8318d5a200d74e4386cf4972db015b75', 'neutron:revision_number': '2', 'neutron:security_group_ids': '597f1057-390b-408a-b8d0-705fb45de27b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.212'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d21d3e2-2f64-49c8-bca6-9efc66f5bd67, chassis=[<ovs.db.idl.Row object at 0x7f8d908cb640>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8d908cb640>], logical_port=2083900f-b759-4c97-8c34-5ad3832f0446) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 22:44:43 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:44:43.536 107302 INFO neutron.agent.ovn.metadata.agent [-] Port 2083900f-b759-4c97-8c34-5ad3832f0446 in datapath 98f694e3-becc-413f-b42b-35a7171f7f96 bound to our chassis
Jan 27 22:44:43 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:44:43.537 107302 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 98f694e3-becc-413f-b42b-35a7171f7f96
Jan 27 22:44:43 compute-0 ovn_controller[98048]: 2026-01-27T22:44:43Z|00037|binding|INFO|Setting lport 2083900f-b759-4c97-8c34-5ad3832f0446 ovn-installed in OVS
Jan 27 22:44:43 compute-0 ovn_controller[98048]: 2026-01-27T22:44:43Z|00038|binding|INFO|Setting lport 2083900f-b759-4c97-8c34-5ad3832f0446 up in Southbound
Jan 27 22:44:43 compute-0 nova_compute[185650]: 2026-01-27 22:44:43.543 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:44:43 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:44:43.559 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[58d98cd3-7dcf-4320-b743-22a29db437d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:44:43 compute-0 systemd-machined[157036]: New machine qemu-2-instance-00000002.
Jan 27 22:44:43 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Jan 27 22:44:43 compute-0 systemd-udevd[239291]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 22:44:43 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:44:43.585 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[395f8a0f-4673-49ae-b15a-67793edff26b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:44:43 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:44:43.589 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[3bce86ee-ce29-4645-a473-5056a0ef50c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:44:43 compute-0 NetworkManager[56600]: <info>  [1769553883.5976] device (tap2083900f-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 22:44:43 compute-0 NetworkManager[56600]: <info>  [1769553883.5989] device (tap2083900f-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 22:44:43 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:44:43.621 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[41f1d205-7a34-4f5b-bb90-92047c882fb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:44:43 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:44:43.640 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[e1a1ca9a-d5f4-426a-9ac8-a4f237994309]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap98f694e3-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:25:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 365000, 'reachable_time': 21737, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239300, 'error': None, 'target': 'ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:44:43 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:44:43.656 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[af76aa12-e44b-4fe9-80ef-9e9fb79c8b02]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap98f694e3-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 365013, 'tstamp': 365013}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239302, 'error': None, 'target': 'ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap98f694e3-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 365017, 'tstamp': 365017}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239302, 'error': None, 'target': 'ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:44:43 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:44:43.658 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98f694e3-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:44:43 compute-0 nova_compute[185650]: 2026-01-27 22:44:43.660 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:44:43 compute-0 nova_compute[185650]: 2026-01-27 22:44:43.661 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:44:43 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:44:43.661 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98f694e3-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:44:43 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:44:43.661 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 22:44:43 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:44:43.662 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap98f694e3-b0, col_values=(('external_ids', {'iface-id': 'acacffcb-4de9-40c5-aeef-3e5766b557e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:44:43 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:44:43.662 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 22:44:43 compute-0 nova_compute[185650]: 2026-01-27 22:44:43.853 185654 DEBUG nova.virt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Emitting event <LifecycleEvent: 1769553883.8534117, d2c3fc6f-7629-469b-be68-8fe07acabe0f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 22:44:43 compute-0 nova_compute[185650]: 2026-01-27 22:44:43.854 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] VM Started (Lifecycle Event)
Jan 27 22:44:43 compute-0 nova_compute[185650]: 2026-01-27 22:44:43.871 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 22:44:43 compute-0 nova_compute[185650]: 2026-01-27 22:44:43.876 185654 DEBUG nova.virt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Emitting event <LifecycleEvent: 1769553883.853567, d2c3fc6f-7629-469b-be68-8fe07acabe0f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 22:44:43 compute-0 nova_compute[185650]: 2026-01-27 22:44:43.877 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] VM Paused (Lifecycle Event)
Jan 27 22:44:43 compute-0 nova_compute[185650]: 2026-01-27 22:44:43.895 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 22:44:43 compute-0 nova_compute[185650]: 2026-01-27 22:44:43.900 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 22:44:43 compute-0 nova_compute[185650]: 2026-01-27 22:44:43.919 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 22:44:44 compute-0 nova_compute[185650]: 2026-01-27 22:44:44.255 185654 DEBUG nova.compute.manager [req-4bbee9ac-cba2-49b6-9e34-9ed65d442c89 req-3920651b-3fac-4c77-91d1-a694c659d1cf b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Received event network-vif-plugged-2083900f-b759-4c97-8c34-5ad3832f0446 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 22:44:44 compute-0 nova_compute[185650]: 2026-01-27 22:44:44.257 185654 DEBUG oslo_concurrency.lockutils [req-4bbee9ac-cba2-49b6-9e34-9ed65d442c89 req-3920651b-3fac-4c77-91d1-a694c659d1cf b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "d2c3fc6f-7629-469b-be68-8fe07acabe0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:44:44 compute-0 nova_compute[185650]: 2026-01-27 22:44:44.257 185654 DEBUG oslo_concurrency.lockutils [req-4bbee9ac-cba2-49b6-9e34-9ed65d442c89 req-3920651b-3fac-4c77-91d1-a694c659d1cf b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "d2c3fc6f-7629-469b-be68-8fe07acabe0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:44:44 compute-0 nova_compute[185650]: 2026-01-27 22:44:44.258 185654 DEBUG oslo_concurrency.lockutils [req-4bbee9ac-cba2-49b6-9e34-9ed65d442c89 req-3920651b-3fac-4c77-91d1-a694c659d1cf b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "d2c3fc6f-7629-469b-be68-8fe07acabe0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:44:44 compute-0 nova_compute[185650]: 2026-01-27 22:44:44.258 185654 DEBUG nova.compute.manager [req-4bbee9ac-cba2-49b6-9e34-9ed65d442c89 req-3920651b-3fac-4c77-91d1-a694c659d1cf b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Processing event network-vif-plugged-2083900f-b759-4c97-8c34-5ad3832f0446 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 22:44:44 compute-0 nova_compute[185650]: 2026-01-27 22:44:44.259 185654 DEBUG nova.compute.manager [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 22:44:44 compute-0 nova_compute[185650]: 2026-01-27 22:44:44.263 185654 DEBUG nova.virt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Emitting event <LifecycleEvent: 1769553884.262927, d2c3fc6f-7629-469b-be68-8fe07acabe0f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 22:44:44 compute-0 nova_compute[185650]: 2026-01-27 22:44:44.263 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] VM Resumed (Lifecycle Event)
Jan 27 22:44:44 compute-0 nova_compute[185650]: 2026-01-27 22:44:44.265 185654 DEBUG nova.virt.libvirt.driver [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 22:44:44 compute-0 nova_compute[185650]: 2026-01-27 22:44:44.269 185654 INFO nova.virt.libvirt.driver [-] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Instance spawned successfully.
Jan 27 22:44:44 compute-0 nova_compute[185650]: 2026-01-27 22:44:44.270 185654 DEBUG nova.virt.libvirt.driver [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 22:44:44 compute-0 nova_compute[185650]: 2026-01-27 22:44:44.286 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 22:44:44 compute-0 nova_compute[185650]: 2026-01-27 22:44:44.292 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 22:44:44 compute-0 nova_compute[185650]: 2026-01-27 22:44:44.300 185654 DEBUG nova.virt.libvirt.driver [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 22:44:44 compute-0 nova_compute[185650]: 2026-01-27 22:44:44.300 185654 DEBUG nova.virt.libvirt.driver [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 22:44:44 compute-0 nova_compute[185650]: 2026-01-27 22:44:44.301 185654 DEBUG nova.virt.libvirt.driver [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 22:44:44 compute-0 nova_compute[185650]: 2026-01-27 22:44:44.301 185654 DEBUG nova.virt.libvirt.driver [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 22:44:44 compute-0 nova_compute[185650]: 2026-01-27 22:44:44.302 185654 DEBUG nova.virt.libvirt.driver [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 22:44:44 compute-0 nova_compute[185650]: 2026-01-27 22:44:44.302 185654 DEBUG nova.virt.libvirt.driver [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 22:44:44 compute-0 nova_compute[185650]: 2026-01-27 22:44:44.338 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 22:44:44 compute-0 nova_compute[185650]: 2026-01-27 22:44:44.373 185654 INFO nova.compute.manager [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Took 7.25 seconds to spawn the instance on the hypervisor.
Jan 27 22:44:44 compute-0 nova_compute[185650]: 2026-01-27 22:44:44.374 185654 DEBUG nova.compute.manager [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 22:44:44 compute-0 nova_compute[185650]: 2026-01-27 22:44:44.428 185654 DEBUG nova.network.neutron [req-ea99004c-8803-49e1-bdba-aa0aa64f10fa req-9a79de5a-b1c2-4228-846e-b85b5c1ed35c b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Updated VIF entry in instance network info cache for port 2083900f-b759-4c97-8c34-5ad3832f0446. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 22:44:44 compute-0 nova_compute[185650]: 2026-01-27 22:44:44.429 185654 DEBUG nova.network.neutron [req-ea99004c-8803-49e1-bdba-aa0aa64f10fa req-9a79de5a-b1c2-4228-846e-b85b5c1ed35c b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Updating instance_info_cache with network_info: [{"id": "2083900f-b759-4c97-8c34-5ad3832f0446", "address": "fa:16:3e:27:7c:56", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2083900f-b7", "ovs_interfaceid": "2083900f-b759-4c97-8c34-5ad3832f0446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 22:44:44 compute-0 nova_compute[185650]: 2026-01-27 22:44:44.454 185654 DEBUG oslo_concurrency.lockutils [req-ea99004c-8803-49e1-bdba-aa0aa64f10fa req-9a79de5a-b1c2-4228-846e-b85b5c1ed35c b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Releasing lock "refresh_cache-d2c3fc6f-7629-469b-be68-8fe07acabe0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 22:44:44 compute-0 nova_compute[185650]: 2026-01-27 22:44:44.455 185654 INFO nova.compute.manager [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Took 7.75 seconds to build instance.
Jan 27 22:44:44 compute-0 nova_compute[185650]: 2026-01-27 22:44:44.473 185654 DEBUG oslo_concurrency.lockutils [None req-2f7a7fa1-1507-417b-b85c-84aabbeeba78 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "d2c3fc6f-7629-469b-be68-8fe07acabe0f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:44:44 compute-0 nova_compute[185650]: 2026-01-27 22:44:44.894 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:44:45 compute-0 nova_compute[185650]: 2026-01-27 22:44:45.011 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:44:45 compute-0 nova_compute[185650]: 2026-01-27 22:44:45.169 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:44:45 compute-0 nova_compute[185650]: 2026-01-27 22:44:45.170 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:44:45 compute-0 nova_compute[185650]: 2026-01-27 22:44:45.171 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:44:45 compute-0 nova_compute[185650]: 2026-01-27 22:44:45.172 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 22:44:45 compute-0 nova_compute[185650]: 2026-01-27 22:44:45.253 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:44:45 compute-0 nova_compute[185650]: 2026-01-27 22:44:45.313 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:44:45 compute-0 nova_compute[185650]: 2026-01-27 22:44:45.315 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:44:45 compute-0 nova_compute[185650]: 2026-01-27 22:44:45.383 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:44:45 compute-0 nova_compute[185650]: 2026-01-27 22:44:45.385 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:44:45 compute-0 nova_compute[185650]: 2026-01-27 22:44:45.443 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:44:45 compute-0 nova_compute[185650]: 2026-01-27 22:44:45.444 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:44:45 compute-0 nova_compute[185650]: 2026-01-27 22:44:45.506 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:44:45 compute-0 nova_compute[185650]: 2026-01-27 22:44:45.513 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:44:45 compute-0 nova_compute[185650]: 2026-01-27 22:44:45.569 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:44:45 compute-0 nova_compute[185650]: 2026-01-27 22:44:45.570 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:44:45 compute-0 nova_compute[185650]: 2026-01-27 22:44:45.639 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:44:45 compute-0 nova_compute[185650]: 2026-01-27 22:44:45.640 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:44:45 compute-0 nova_compute[185650]: 2026-01-27 22:44:45.707 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:44:45 compute-0 nova_compute[185650]: 2026-01-27 22:44:45.709 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:44:45 compute-0 nova_compute[185650]: 2026-01-27 22:44:45.770 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:44:46 compute-0 nova_compute[185650]: 2026-01-27 22:44:46.116 185654 WARNING nova.virt.libvirt.driver [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 22:44:46 compute-0 nova_compute[185650]: 2026-01-27 22:44:46.118 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5258MB free_disk=72.42142105102539GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 22:44:46 compute-0 nova_compute[185650]: 2026-01-27 22:44:46.118 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:44:46 compute-0 nova_compute[185650]: 2026-01-27 22:44:46.119 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:44:46 compute-0 nova_compute[185650]: 2026-01-27 22:44:46.295 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 344c74c3-95d6-4f19-993f-b4a89c9d074b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:44:46 compute-0 nova_compute[185650]: 2026-01-27 22:44:46.296 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance d2c3fc6f-7629-469b-be68-8fe07acabe0f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:44:46 compute-0 nova_compute[185650]: 2026-01-27 22:44:46.296 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 22:44:46 compute-0 nova_compute[185650]: 2026-01-27 22:44:46.297 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 22:44:46 compute-0 nova_compute[185650]: 2026-01-27 22:44:46.319 185654 DEBUG nova.compute.manager [req-46d2e2ca-d5fc-4d86-a0e2-4aaa6031f025 req-63ff02f3-e426-4df9-97cd-4a10b9765434 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Received event network-vif-plugged-2083900f-b759-4c97-8c34-5ad3832f0446 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 22:44:46 compute-0 nova_compute[185650]: 2026-01-27 22:44:46.320 185654 DEBUG oslo_concurrency.lockutils [req-46d2e2ca-d5fc-4d86-a0e2-4aaa6031f025 req-63ff02f3-e426-4df9-97cd-4a10b9765434 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "d2c3fc6f-7629-469b-be68-8fe07acabe0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:44:46 compute-0 nova_compute[185650]: 2026-01-27 22:44:46.321 185654 DEBUG oslo_concurrency.lockutils [req-46d2e2ca-d5fc-4d86-a0e2-4aaa6031f025 req-63ff02f3-e426-4df9-97cd-4a10b9765434 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "d2c3fc6f-7629-469b-be68-8fe07acabe0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:44:46 compute-0 nova_compute[185650]: 2026-01-27 22:44:46.321 185654 DEBUG oslo_concurrency.lockutils [req-46d2e2ca-d5fc-4d86-a0e2-4aaa6031f025 req-63ff02f3-e426-4df9-97cd-4a10b9765434 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "d2c3fc6f-7629-469b-be68-8fe07acabe0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:44:46 compute-0 nova_compute[185650]: 2026-01-27 22:44:46.321 185654 DEBUG nova.compute.manager [req-46d2e2ca-d5fc-4d86-a0e2-4aaa6031f025 req-63ff02f3-e426-4df9-97cd-4a10b9765434 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] No waiting events found dispatching network-vif-plugged-2083900f-b759-4c97-8c34-5ad3832f0446 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 22:44:46 compute-0 nova_compute[185650]: 2026-01-27 22:44:46.322 185654 WARNING nova.compute.manager [req-46d2e2ca-d5fc-4d86-a0e2-4aaa6031f025 req-63ff02f3-e426-4df9-97cd-4a10b9765434 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Received unexpected event network-vif-plugged-2083900f-b759-4c97-8c34-5ad3832f0446 for instance with vm_state active and task_state None.
Jan 27 22:44:46 compute-0 nova_compute[185650]: 2026-01-27 22:44:46.396 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 22:44:46 compute-0 nova_compute[185650]: 2026-01-27 22:44:46.408 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 22:44:46 compute-0 nova_compute[185650]: 2026-01-27 22:44:46.426 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 22:44:46 compute-0 nova_compute[185650]: 2026-01-27 22:44:46.427 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.308s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:44:47 compute-0 nova_compute[185650]: 2026-01-27 22:44:47.509 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:44:48 compute-0 podman[239337]: 2026-01-27 22:44:48.393167507 +0000 UTC m=+0.099247934 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260126, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40)
Jan 27 22:44:48 compute-0 nova_compute[185650]: 2026-01-27 22:44:48.409 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:44:48 compute-0 nova_compute[185650]: 2026-01-27 22:44:48.410 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 22:44:48 compute-0 nova_compute[185650]: 2026-01-27 22:44:48.410 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 22:44:49 compute-0 nova_compute[185650]: 2026-01-27 22:44:49.094 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "refresh_cache-344c74c3-95d6-4f19-993f-b4a89c9d074b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 22:44:49 compute-0 nova_compute[185650]: 2026-01-27 22:44:49.095 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquired lock "refresh_cache-344c74c3-95d6-4f19-993f-b4a89c9d074b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 22:44:49 compute-0 nova_compute[185650]: 2026-01-27 22:44:49.095 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 22:44:49 compute-0 nova_compute[185650]: 2026-01-27 22:44:49.096 185654 DEBUG nova.objects.instance [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 344c74c3-95d6-4f19-993f-b4a89c9d074b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 22:44:49 compute-0 nova_compute[185650]: 2026-01-27 22:44:49.897 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:44:50 compute-0 podman[239357]: 2026-01-27 22:44:50.376595042 +0000 UTC m=+0.078524720 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 27 22:44:50 compute-0 nova_compute[185650]: 2026-01-27 22:44:50.435 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Updating instance_info_cache with network_info: [{"id": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "address": "fa:16:3e:27:72:fe", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.119", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap389fa2e1-24", "ovs_interfaceid": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 22:44:50 compute-0 nova_compute[185650]: 2026-01-27 22:44:50.451 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Releasing lock "refresh_cache-344c74c3-95d6-4f19-993f-b4a89c9d074b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 22:44:50 compute-0 nova_compute[185650]: 2026-01-27 22:44:50.452 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 22:44:50 compute-0 nova_compute[185650]: 2026-01-27 22:44:50.452 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:44:50 compute-0 nova_compute[185650]: 2026-01-27 22:44:50.452 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:44:50 compute-0 nova_compute[185650]: 2026-01-27 22:44:50.453 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 22:44:50 compute-0 nova_compute[185650]: 2026-01-27 22:44:50.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:44:50 compute-0 nova_compute[185650]: 2026-01-27 22:44:50.994 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:44:51 compute-0 nova_compute[185650]: 2026-01-27 22:44:51.015 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:44:51 compute-0 nova_compute[185650]: 2026-01-27 22:44:51.016 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:44:51 compute-0 nova_compute[185650]: 2026-01-27 22:44:51.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:44:51 compute-0 nova_compute[185650]: 2026-01-27 22:44:51.994 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:44:52 compute-0 nova_compute[185650]: 2026-01-27 22:44:52.513 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:44:53 compute-0 podman[239374]: 2026-01-27 22:44:53.368166395 +0000 UTC m=+0.073689996 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 22:44:54 compute-0 nova_compute[185650]: 2026-01-27 22:44:54.899 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:44:55 compute-0 podman[239399]: 2026-01-27 22:44:55.39376169 +0000 UTC m=+0.096695858 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, 
container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 22:44:55 compute-0 nova_compute[185650]: 2026-01-27 22:44:55.521 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:44:55 compute-0 nova_compute[185650]: 2026-01-27 22:44:55.541 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Triggering sync for uuid 344c74c3-95d6-4f19-993f-b4a89c9d074b _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 27 22:44:55 compute-0 nova_compute[185650]: 2026-01-27 22:44:55.541 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Triggering sync for uuid d2c3fc6f-7629-469b-be68-8fe07acabe0f _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 27 22:44:55 compute-0 nova_compute[185650]: 2026-01-27 22:44:55.541 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "344c74c3-95d6-4f19-993f-b4a89c9d074b" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:44:55 compute-0 nova_compute[185650]: 2026-01-27 22:44:55.542 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "344c74c3-95d6-4f19-993f-b4a89c9d074b" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:44:55 compute-0 nova_compute[185650]: 2026-01-27 22:44:55.542 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "d2c3fc6f-7629-469b-be68-8fe07acabe0f" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:44:55 compute-0 nova_compute[185650]: 2026-01-27 22:44:55.543 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "d2c3fc6f-7629-469b-be68-8fe07acabe0f" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:44:55 compute-0 nova_compute[185650]: 2026-01-27 22:44:55.579 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "344c74c3-95d6-4f19-993f-b4a89c9d074b" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:44:55 compute-0 nova_compute[185650]: 2026-01-27 22:44:55.580 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "d2c3fc6f-7629-469b-be68-8fe07acabe0f" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:44:57 compute-0 nova_compute[185650]: 2026-01-27 22:44:57.516 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:44:58 compute-0 podman[239418]: 2026-01-27 22:44:58.412615952 +0000 UTC m=+0.112635220 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4, io.openshift.tags=base rhel9, release=1214.1726694543, config_id=kepler, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, build-date=2024-09-18T21:23:30)
Jan 27 22:44:58 compute-0 podman[239419]: 2026-01-27 22:44:58.438433958 +0000 UTC m=+0.134806009 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 22:44:59 compute-0 podman[201529]: time="2026-01-27T22:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:44:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 22:44:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4343 "" "Go-http-client/1.1"
Jan 27 22:44:59 compute-0 nova_compute[185650]: 2026-01-27 22:44:59.901 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:45:01 compute-0 openstack_network_exporter[204648]: ERROR   22:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:45:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:45:01 compute-0 openstack_network_exporter[204648]: ERROR   22:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:45:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:45:02 compute-0 nova_compute[185650]: 2026-01-27 22:45:02.519 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:45:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:45:04.132 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:45:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:45:04.133 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:45:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:45:04.133 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:45:04 compute-0 nova_compute[185650]: 2026-01-27 22:45:04.905 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:45:07 compute-0 podman[239462]: 2026-01-27 22:45:07.381903873 +0000 UTC m=+0.088268248 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 22:45:07 compute-0 nova_compute[185650]: 2026-01-27 22:45:07.521 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:45:09 compute-0 nova_compute[185650]: 2026-01-27 22:45:09.910 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:45:10 compute-0 podman[239486]: 2026-01-27 22:45:10.414350458 +0000 UTC m=+0.102868920 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, distribution-scope=public, vcs-type=git, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.6)
Jan 27 22:45:12 compute-0 nova_compute[185650]: 2026-01-27 22:45:12.523 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:45:13 compute-0 ovn_controller[98048]: 2026-01-27T22:45:13Z|00039|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Jan 27 22:45:14 compute-0 nova_compute[185650]: 2026-01-27 22:45:14.914 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:45:17 compute-0 nova_compute[185650]: 2026-01-27 22:45:17.525 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:45:19 compute-0 podman[239524]: 2026-01-27 22:45:19.392885469 +0000 UTC m=+0.092239442 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0)
Jan 27 22:45:19 compute-0 ovn_controller[98048]: 2026-01-27T22:45:19Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:27:7c:56 192.168.0.225
Jan 27 22:45:19 compute-0 ovn_controller[98048]: 2026-01-27T22:45:19Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:27:7c:56 192.168.0.225
Jan 27 22:45:19 compute-0 nova_compute[185650]: 2026-01-27 22:45:19.917 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:45:21 compute-0 podman[239544]: 2026-01-27 22:45:21.3700744 +0000 UTC m=+0.074920699 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 22:45:22 compute-0 nova_compute[185650]: 2026-01-27 22:45:22.528 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:45:24 compute-0 podman[239562]: 2026-01-27 22:45:24.378140373 +0000 UTC m=+0.081648465 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 22:45:24 compute-0 nova_compute[185650]: 2026-01-27 22:45:24.921 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:45:26 compute-0 podman[239585]: 2026-01-27 22:45:26.394916092 +0000 UTC m=+0.083924221 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 22:45:27 compute-0 nova_compute[185650]: 2026-01-27 22:45:27.529 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:45:29 compute-0 podman[239603]: 2026-01-27 22:45:29.393904573 +0000 UTC m=+0.093766047 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, release=1214.1726694543, com.redhat.component=ubi9-container, container_name=kepler, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, release-0.7.12=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', 
'/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 22:45:29 compute-0 podman[239604]: 2026-01-27 22:45:29.476457383 +0000 UTC m=+0.165637157 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 22:45:29 compute-0 podman[201529]: time="2026-01-27T22:45:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:45:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:45:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 22:45:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4342 "" "Go-http-client/1.1"
Jan 27 22:45:29 compute-0 nova_compute[185650]: 2026-01-27 22:45:29.924 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:45:31 compute-0 openstack_network_exporter[204648]: ERROR   22:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:45:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:45:31 compute-0 openstack_network_exporter[204648]: ERROR   22:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:45:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:45:32 compute-0 nova_compute[185650]: 2026-01-27 22:45:32.532 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:45:34 compute-0 nova_compute[185650]: 2026-01-27 22:45:34.927 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:45:37 compute-0 nova_compute[185650]: 2026-01-27 22:45:37.534 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:45:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:38.102 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 22:45:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:38.103 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 22:45:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:38.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c646060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b13faa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:45:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:38.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f826c6475f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:45:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:38.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b13faa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:45:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:38.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b13faa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:45:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:38.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b13faa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:45:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:38.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b13faa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:45:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:38.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b13faa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:45:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:38.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b13faa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:45:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:38.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b13faa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:45:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:38.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b13faa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:45:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b13faa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:45:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b13faa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:45:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b13faa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:45:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b13faa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:45:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645490>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b13faa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:45:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b13faa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:45:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b13faa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:45:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b13faa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:45:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b13faa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:45:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b13faa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:45:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b13faa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:45:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b13faa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:45:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:38.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b13faa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:45:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:38.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b13faa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:45:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:38.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b13faa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:45:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:38.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b13faa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:45:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:38.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6477a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b13faa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:45:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:38.108 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance d2c3fc6f-7629-469b-be68-8fe07acabe0f from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 22:45:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:38.109 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/d2c3fc6f-7629-469b-be68-8fe07acabe0f -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}154da27a0715c4500fb4356c9136f029f6352e657551e62d11427d8299e729cc" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 22:45:38 compute-0 podman[239646]: 2026-01-27 22:45:38.378248569 +0000 UTC m=+0.077308080 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.241 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Tue, 27 Jan 2026 22:45:38 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-effa23d4-d065-4307-b880-dfaec7bb1225 x-openstack-request-id: req-effa23d4-d065-4307-b880-dfaec7bb1225 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.241 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "d2c3fc6f-7629-469b-be68-8fe07acabe0f", "name": "vn-bxiivp3-qxfwvjemo3rq-sawqp3hw5btx-vnf-e5pqbtf6sduj", "status": "ACTIVE", "tenant_id": "8318d5a200d74e4386cf4972db015b75", "user_id": "7387204f74504e288ed7a5dee73f5083", "metadata": {"metering.server_group": "3b67098f-eb50-41e2-8c8a-348367561673"}, "hostId": "6b704d868c202dfce1245c3ae64d5f83176b88963479398e3b586eea", "image": {"id": "7e803ca7-2382-4e5a-95f7-55acaa154415", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/7e803ca7-2382-4e5a-95f7-55acaa154415"}]}, "flavor": {"id": "c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093"}]}, "created": "2026-01-27T22:44:35Z", "updated": "2026-01-27T22:44:44Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.225", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:27:7c:56"}, {"version": 4, "addr": "192.168.122.212", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:27:7c:56"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/d2c3fc6f-7629-469b-be68-8fe07acabe0f"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/d2c3fc6f-7629-469b-be68-8fe07acabe0f"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-27T22:44:44.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, 
"OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.241 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/d2c3fc6f-7629-469b-be68-8fe07acabe0f used request id req-effa23d4-d065-4307-b880-dfaec7bb1225 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.243 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd2c3fc6f-7629-469b-be68-8fe07acabe0f', 'name': 'vn-bxiivp3-qxfwvjemo3rq-sawqp3hw5btx-vnf-e5pqbtf6sduj', 'flavor': {'id': 'c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7e803ca7-2382-4e5a-95f7-55acaa154415'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8318d5a200d74e4386cf4972db015b75', 'user_id': '7387204f74504e288ed7a5dee73f5083', 'hostId': '6b704d868c202dfce1245c3ae64d5f83176b88963479398e3b586eea', 'status': 'active', 'metadata': {'metering.server_group': '3b67098f-eb50-41e2-8c8a-348367561673'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.246 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '344c74c3-95d6-4f19-993f-b4a89c9d074b', 'name': 'test_0', 'flavor': {'id': 'c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7e803ca7-2382-4e5a-95f7-55acaa154415'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8318d5a200d74e4386cf4972db015b75', 'user_id': '7387204f74504e288ed7a5dee73f5083', 'hostId': '6b704d868c202dfce1245c3ae64d5f83176b88963479398e3b586eea', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.246 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.246 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c646060>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.246 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c646060>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.246 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.247 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T22:45:39.246868) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.249 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for d2c3fc6f-7629-469b-be68-8fe07acabe0f / tap2083900f-b7 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.250 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.253 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.254 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.254 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f826c645dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.254 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.254 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.254 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.254 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.254 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.255 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-bxiivp3-qxfwvjemo3rq-sawqp3hw5btx-vnf-e5pqbtf6sduj>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-bxiivp3-qxfwvjemo3rq-sawqp3hw5btx-vnf-e5pqbtf6sduj>]
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.255 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-27T22:45:39.254615) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.255 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f826c647800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.255 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.255 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.255 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.255 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.255 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.255 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T22:45:39.255592) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.256 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.256 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.256 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f826c647650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.256 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.256 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.256 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.256 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.257 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.257 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T22:45:39.256897) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.257 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.257 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.257 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f826c645640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.257 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.257 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.258 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.258 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.258 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T22:45:39.258064) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.318 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.latency volume: 1552175425 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.318 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.latency volume: 10486324 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.318 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.381 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.latency volume: 1982773015 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.382 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.latency volume: 11972381 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.382 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.382 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.382 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f826c8ae7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.382 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.383 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.383 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.383 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.383 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.383 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T22:45:39.383249) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.383 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.bytes volume: 1878 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.384 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.384 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f826c645a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.384 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.384 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.384 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.384 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.384 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.requests volume: 218 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.384 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T22:45:39.384436) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.384 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.385 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.385 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.385 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.385 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.386 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.386 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f826c6462a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.386 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.386 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.386 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.386 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.386 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.386 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T22:45:39.386607) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.387 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.387 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.387 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f826c647f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.387 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.387 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.387 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.388 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.388 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T22:45:39.388010) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.409 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/cpu volume: 34500000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.429 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/cpu volume: 34130000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.430 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.430 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f826c645af0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.431 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.431 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.431 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.431 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.431 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T22:45:39.431275) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.432 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.432 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f826c645d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.432 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.432 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.432 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.432 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.432 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/memory.usage volume: 49.72265625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.433 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/memory.usage volume: 48.91796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.433 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.434 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f826c645b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.434 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.434 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T22:45:39.432733) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.434 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.434 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.434 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.435 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.435 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f826c644a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.436 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T22:45:39.434924) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.436 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.436 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.436 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.436 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T22:45:39.436464) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.436 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.457 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.458 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.458 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.478 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.478 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.479 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.479 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.480 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f826c6453a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.480 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.480 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645490>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.480 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645490>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.480 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.480 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.480 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T22:45:39.480462) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.481 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.481 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.481 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.482 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.482 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.482 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.483 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f826c6454c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.483 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.483 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.483 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.483 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.483 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.latency volume: 557006689 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.484 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.latency volume: 98708783 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.484 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T22:45:39.483490) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.484 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.latency volume: 82244967 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.484 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.latency volume: 603707572 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.485 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.latency volume: 113814738 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.485 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.latency volume: 101138361 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.485 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.486 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f826c645520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.486 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.486 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645550>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.486 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645550>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.486 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.486 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T22:45:39.486514) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.486 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.487 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.487 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.487 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.487 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.488 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.488 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.488 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f826c645d90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.489 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.489 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.489 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.489 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.489 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T22:45:39.489286) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.489 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.489 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.bytes.delta volume: 1788 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.490 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.490 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f826c646570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.490 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.490 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.490 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.490 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.491 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.491 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.491 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T22:45:39.490883) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.491 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.492 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f826c645580>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.492 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.492 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.492 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.492 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.493 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.493 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.493 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.493 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.494 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.494 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.495 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.495 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T22:45:39.492843) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.495 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f826c6455e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.495 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.495 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645610>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.495 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645610>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.495 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.495 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.bytes volume: 41742336 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.496 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.496 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.497 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.497 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.497 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T22:45:39.495774) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.497 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.498 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.498 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f826c644050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.498 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.498 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.498 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.499 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.499 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.499 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T22:45:39.499008) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.499 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.499 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.500 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f826c647860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.500 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.500 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.500 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.500 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.500 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.outgoing.bytes volume: 1906 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.501 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.bytes volume: 2132 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.501 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.502 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f826c6476e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.502 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T22:45:39.500792) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.502 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.502 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.503 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.503 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.503 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.503 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.bytes.delta volume: 2132 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.504 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.504 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f826c6456a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.504 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.504 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T22:45:39.503170) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.504 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645730>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.505 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645730>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.505 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.505 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.505 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.506 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f826f277b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.506 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T22:45:39.505094) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.506 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.506 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.506 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.506 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.507 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.507 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.507 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.508 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.508 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.508 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.509 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f826c647770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.509 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.509 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6477a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.510 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6477a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.510 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T22:45:39.506822) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.510 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.510 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.510 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-bxiivp3-qxfwvjemo3rq-sawqp3hw5btx-vnf-e5pqbtf6sduj>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-bxiivp3-qxfwvjemo3rq-sawqp3hw5btx-vnf-e5pqbtf6sduj>]
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.511 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.511 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.511 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.511 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.511 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-27T22:45:39.510292) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.511 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.512 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.512 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.512 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.512 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.512 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.512 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.512 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.512 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.512 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.512 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.512 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.512 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.512 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.512 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.512 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.513 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.513 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.513 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.513 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.513 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:45:39 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:45:39.513 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:45:39 compute-0 nova_compute[185650]: 2026-01-27 22:45:39.929 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:45:41 compute-0 podman[239668]: 2026-01-27 22:45:41.393214412 +0000 UTC m=+0.095696342 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, release=1755695350, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., version=9.6, vcs-type=git, vendor=Red Hat, Inc., name=ubi9-minimal, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Jan 27 22:45:42 compute-0 nova_compute[185650]: 2026-01-27 22:45:42.536 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:45:44 compute-0 nova_compute[185650]: 2026-01-27 22:45:44.932 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:45:44 compute-0 nova_compute[185650]: 2026-01-27 22:45:44.994 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:45:45 compute-0 nova_compute[185650]: 2026-01-27 22:45:45.022 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:45:45 compute-0 nova_compute[185650]: 2026-01-27 22:45:45.023 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:45:45 compute-0 nova_compute[185650]: 2026-01-27 22:45:45.023 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:45:45 compute-0 nova_compute[185650]: 2026-01-27 22:45:45.024 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 22:45:45 compute-0 nova_compute[185650]: 2026-01-27 22:45:45.094 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:45:45 compute-0 nova_compute[185650]: 2026-01-27 22:45:45.157 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:45:45 compute-0 nova_compute[185650]: 2026-01-27 22:45:45.158 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:45:45 compute-0 nova_compute[185650]: 2026-01-27 22:45:45.216 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:45:45 compute-0 nova_compute[185650]: 2026-01-27 22:45:45.218 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:45:45 compute-0 nova_compute[185650]: 2026-01-27 22:45:45.275 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:45:45 compute-0 nova_compute[185650]: 2026-01-27 22:45:45.276 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:45:45 compute-0 nova_compute[185650]: 2026-01-27 22:45:45.334 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:45:45 compute-0 nova_compute[185650]: 2026-01-27 22:45:45.340 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:45:45 compute-0 nova_compute[185650]: 2026-01-27 22:45:45.400 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:45:45 compute-0 nova_compute[185650]: 2026-01-27 22:45:45.401 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:45:45 compute-0 nova_compute[185650]: 2026-01-27 22:45:45.458 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:45:45 compute-0 nova_compute[185650]: 2026-01-27 22:45:45.460 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:45:45 compute-0 nova_compute[185650]: 2026-01-27 22:45:45.522 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:45:45 compute-0 nova_compute[185650]: 2026-01-27 22:45:45.524 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:45:45 compute-0 nova_compute[185650]: 2026-01-27 22:45:45.587 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:45:45 compute-0 nova_compute[185650]: 2026-01-27 22:45:45.942 185654 WARNING nova.virt.libvirt.driver [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 22:45:45 compute-0 nova_compute[185650]: 2026-01-27 22:45:45.943 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5090MB free_disk=72.40045928955078GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 22:45:45 compute-0 nova_compute[185650]: 2026-01-27 22:45:45.944 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:45:45 compute-0 nova_compute[185650]: 2026-01-27 22:45:45.944 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:45:46 compute-0 nova_compute[185650]: 2026-01-27 22:45:46.007 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 344c74c3-95d6-4f19-993f-b4a89c9d074b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:45:46 compute-0 nova_compute[185650]: 2026-01-27 22:45:46.007 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance d2c3fc6f-7629-469b-be68-8fe07acabe0f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:45:46 compute-0 nova_compute[185650]: 2026-01-27 22:45:46.008 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 22:45:46 compute-0 nova_compute[185650]: 2026-01-27 22:45:46.008 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 22:45:46 compute-0 nova_compute[185650]: 2026-01-27 22:45:46.133 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 22:45:46 compute-0 nova_compute[185650]: 2026-01-27 22:45:46.147 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 22:45:46 compute-0 nova_compute[185650]: 2026-01-27 22:45:46.149 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 22:45:46 compute-0 nova_compute[185650]: 2026-01-27 22:45:46.149 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:45:47 compute-0 nova_compute[185650]: 2026-01-27 22:45:47.539 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:45:48 compute-0 nova_compute[185650]: 2026-01-27 22:45:48.149 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:45:48 compute-0 nova_compute[185650]: 2026-01-27 22:45:48.149 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 22:45:48 compute-0 nova_compute[185650]: 2026-01-27 22:45:48.360 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "refresh_cache-d2c3fc6f-7629-469b-be68-8fe07acabe0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 22:45:48 compute-0 nova_compute[185650]: 2026-01-27 22:45:48.361 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquired lock "refresh_cache-d2c3fc6f-7629-469b-be68-8fe07acabe0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 22:45:48 compute-0 nova_compute[185650]: 2026-01-27 22:45:48.361 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 22:45:49 compute-0 nova_compute[185650]: 2026-01-27 22:45:49.933 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:45:50 compute-0 podman[239712]: 2026-01-27 22:45:50.437582145 +0000 UTC m=+0.140373356 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 27 22:45:51 compute-0 nova_compute[185650]: 2026-01-27 22:45:51.127 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Updating instance_info_cache with network_info: [{"id": "2083900f-b759-4c97-8c34-5ad3832f0446", "address": "fa:16:3e:27:7c:56", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2083900f-b7", "ovs_interfaceid": "2083900f-b759-4c97-8c34-5ad3832f0446", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 22:45:51 compute-0 nova_compute[185650]: 2026-01-27 22:45:51.145 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Releasing lock "refresh_cache-d2c3fc6f-7629-469b-be68-8fe07acabe0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 22:45:51 compute-0 nova_compute[185650]: 2026-01-27 22:45:51.146 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 22:45:51 compute-0 nova_compute[185650]: 2026-01-27 22:45:51.147 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:45:51 compute-0 nova_compute[185650]: 2026-01-27 22:45:51.147 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:45:51 compute-0 nova_compute[185650]: 2026-01-27 22:45:51.147 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:45:51 compute-0 nova_compute[185650]: 2026-01-27 22:45:51.148 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 22:45:51 compute-0 nova_compute[185650]: 2026-01-27 22:45:51.994 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:45:51 compute-0 nova_compute[185650]: 2026-01-27 22:45:51.995 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:45:52 compute-0 podman[239729]: 2026-01-27 22:45:52.43384786 +0000 UTC m=+0.119776100 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 22:45:52 compute-0 nova_compute[185650]: 2026-01-27 22:45:52.544 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:45:53 compute-0 nova_compute[185650]: 2026-01-27 22:45:53.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:45:53 compute-0 nova_compute[185650]: 2026-01-27 22:45:53.995 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:45:54 compute-0 nova_compute[185650]: 2026-01-27 22:45:54.936 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:45:55 compute-0 podman[239750]: 2026-01-27 22:45:55.366929192 +0000 UTC m=+0.071658836 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 22:45:57 compute-0 podman[239774]: 2026-01-27 22:45:57.414303137 +0000 UTC m=+0.104205358 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi)
Jan 27 22:45:57 compute-0 nova_compute[185650]: 2026-01-27 22:45:57.547 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:45:59 compute-0 podman[201529]: time="2026-01-27T22:45:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:45:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:45:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 22:45:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:45:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4349 "" "Go-http-client/1.1"
Jan 27 22:45:59 compute-0 nova_compute[185650]: 2026-01-27 22:45:59.938 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:46:00 compute-0 podman[239796]: 2026-01-27 22:46:00.37335818 +0000 UTC m=+0.069352110 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, build-date=2024-09-18T21:23:30, container_name=kepler, release-0.7.12=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=ubi9-container, release=1214.1726694543)
Jan 27 22:46:00 compute-0 podman[239797]: 2026-01-27 22:46:00.430530856 +0000 UTC m=+0.121683635 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 22:46:01 compute-0 openstack_network_exporter[204648]: ERROR   22:46:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:46:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:46:01 compute-0 openstack_network_exporter[204648]: ERROR   22:46:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:46:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:46:02 compute-0 nova_compute[185650]: 2026-01-27 22:46:02.549 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:46:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:46:04.133 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:46:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:46:04.134 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:46:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:46:04.134 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:46:04 compute-0 nova_compute[185650]: 2026-01-27 22:46:04.940 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:46:07 compute-0 nova_compute[185650]: 2026-01-27 22:46:07.551 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:46:09 compute-0 podman[239843]: 2026-01-27 22:46:09.366287935 +0000 UTC m=+0.069559956 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 22:46:09 compute-0 nova_compute[185650]: 2026-01-27 22:46:09.947 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:46:12 compute-0 podman[239867]: 2026-01-27 22:46:12.435681905 +0000 UTC m=+0.130469340 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.6, architecture=x86_64)
Jan 27 22:46:12 compute-0 nova_compute[185650]: 2026-01-27 22:46:12.553 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:46:14 compute-0 nova_compute[185650]: 2026-01-27 22:46:14.948 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:46:17 compute-0 nova_compute[185650]: 2026-01-27 22:46:17.555 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:46:19 compute-0 nova_compute[185650]: 2026-01-27 22:46:19.952 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:46:21 compute-0 podman[239888]: 2026-01-27 22:46:21.366174996 +0000 UTC m=+0.073007854 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40)
Jan 27 22:46:22 compute-0 nova_compute[185650]: 2026-01-27 22:46:22.557 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:46:23 compute-0 podman[239905]: 2026-01-27 22:46:23.363278287 +0000 UTC m=+0.066234102 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 22:46:24 compute-0 nova_compute[185650]: 2026-01-27 22:46:24.953 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:46:26 compute-0 podman[239924]: 2026-01-27 22:46:26.390861462 +0000 UTC m=+0.077803892 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 22:46:27 compute-0 nova_compute[185650]: 2026-01-27 22:46:27.560 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:46:28 compute-0 podman[239948]: 2026-01-27 22:46:28.38163886 +0000 UTC m=+0.085290900 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Jan 27 22:46:29 compute-0 podman[201529]: time="2026-01-27T22:46:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:46:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:46:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 22:46:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:46:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4345 "" "Go-http-client/1.1"
Jan 27 22:46:29 compute-0 nova_compute[185650]: 2026-01-27 22:46:29.955 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:46:31 compute-0 podman[239969]: 2026-01-27 22:46:31.396985429 +0000 UTC m=+0.094064698 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, config_id=kepler, io.openshift.expose-services=, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, build-date=2024-09-18T21:23:30, container_name=kepler, architecture=x86_64, distribution-scope=public, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container)
Jan 27 22:46:31 compute-0 openstack_network_exporter[204648]: ERROR   22:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:46:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:46:31 compute-0 openstack_network_exporter[204648]: ERROR   22:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:46:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:46:31 compute-0 podman[239970]: 2026-01-27 22:46:31.480422507 +0000 UTC m=+0.159853689 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 22:46:32 compute-0 nova_compute[185650]: 2026-01-27 22:46:32.562 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:46:34 compute-0 nova_compute[185650]: 2026-01-27 22:46:34.956 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:46:37 compute-0 nova_compute[185650]: 2026-01-27 22:46:37.565 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:46:39 compute-0 nova_compute[185650]: 2026-01-27 22:46:39.961 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:46:40 compute-0 podman[240015]: 2026-01-27 22:46:40.384852848 +0000 UTC m=+0.080202087 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 22:46:42 compute-0 nova_compute[185650]: 2026-01-27 22:46:42.566 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:46:43 compute-0 podman[240039]: 2026-01-27 22:46:43.410334235 +0000 UTC m=+0.108436181 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, architecture=x86_64, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 27 22:46:44 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 27 22:46:44 compute-0 nova_compute[185650]: 2026-01-27 22:46:44.964 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:46:45 compute-0 nova_compute[185650]: 2026-01-27 22:46:45.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:46:46 compute-0 nova_compute[185650]: 2026-01-27 22:46:46.017 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:46:46 compute-0 nova_compute[185650]: 2026-01-27 22:46:46.017 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:46:46 compute-0 nova_compute[185650]: 2026-01-27 22:46:46.018 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:46:46 compute-0 nova_compute[185650]: 2026-01-27 22:46:46.018 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 22:46:46 compute-0 nova_compute[185650]: 2026-01-27 22:46:46.084 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:46:46 compute-0 nova_compute[185650]: 2026-01-27 22:46:46.139 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:46:46 compute-0 nova_compute[185650]: 2026-01-27 22:46:46.140 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:46:46 compute-0 nova_compute[185650]: 2026-01-27 22:46:46.193 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:46:46 compute-0 nova_compute[185650]: 2026-01-27 22:46:46.194 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:46:46 compute-0 nova_compute[185650]: 2026-01-27 22:46:46.248 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:46:46 compute-0 nova_compute[185650]: 2026-01-27 22:46:46.249 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:46:46 compute-0 nova_compute[185650]: 2026-01-27 22:46:46.302 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:46:46 compute-0 nova_compute[185650]: 2026-01-27 22:46:46.309 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:46:46 compute-0 nova_compute[185650]: 2026-01-27 22:46:46.367 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:46:46 compute-0 nova_compute[185650]: 2026-01-27 22:46:46.369 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:46:46 compute-0 nova_compute[185650]: 2026-01-27 22:46:46.424 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:46:46 compute-0 nova_compute[185650]: 2026-01-27 22:46:46.425 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:46:46 compute-0 nova_compute[185650]: 2026-01-27 22:46:46.480 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:46:46 compute-0 nova_compute[185650]: 2026-01-27 22:46:46.481 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:46:46 compute-0 nova_compute[185650]: 2026-01-27 22:46:46.538 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:46:46 compute-0 nova_compute[185650]: 2026-01-27 22:46:46.881 185654 WARNING nova.virt.libvirt.driver [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 22:46:46 compute-0 nova_compute[185650]: 2026-01-27 22:46:46.883 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5084MB free_disk=72.40045928955078GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 22:46:46 compute-0 nova_compute[185650]: 2026-01-27 22:46:46.883 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:46:46 compute-0 nova_compute[185650]: 2026-01-27 22:46:46.883 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:46:46 compute-0 nova_compute[185650]: 2026-01-27 22:46:46.968 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 344c74c3-95d6-4f19-993f-b4a89c9d074b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:46:46 compute-0 nova_compute[185650]: 2026-01-27 22:46:46.969 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance d2c3fc6f-7629-469b-be68-8fe07acabe0f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:46:46 compute-0 nova_compute[185650]: 2026-01-27 22:46:46.969 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 22:46:46 compute-0 nova_compute[185650]: 2026-01-27 22:46:46.969 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 22:46:47 compute-0 nova_compute[185650]: 2026-01-27 22:46:47.029 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 22:46:47 compute-0 nova_compute[185650]: 2026-01-27 22:46:47.044 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 22:46:47 compute-0 nova_compute[185650]: 2026-01-27 22:46:47.045 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 22:46:47 compute-0 nova_compute[185650]: 2026-01-27 22:46:47.045 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:46:47 compute-0 nova_compute[185650]: 2026-01-27 22:46:47.568 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:46:48 compute-0 nova_compute[185650]: 2026-01-27 22:46:48.046 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:46:48 compute-0 nova_compute[185650]: 2026-01-27 22:46:48.046 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 22:46:48 compute-0 nova_compute[185650]: 2026-01-27 22:46:48.047 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 22:46:49 compute-0 nova_compute[185650]: 2026-01-27 22:46:49.125 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "refresh_cache-344c74c3-95d6-4f19-993f-b4a89c9d074b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 22:46:49 compute-0 nova_compute[185650]: 2026-01-27 22:46:49.126 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquired lock "refresh_cache-344c74c3-95d6-4f19-993f-b4a89c9d074b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 22:46:49 compute-0 nova_compute[185650]: 2026-01-27 22:46:49.127 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 22:46:49 compute-0 nova_compute[185650]: 2026-01-27 22:46:49.128 185654 DEBUG nova.objects.instance [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 344c74c3-95d6-4f19-993f-b4a89c9d074b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 22:46:49 compute-0 nova_compute[185650]: 2026-01-27 22:46:49.967 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:46:51 compute-0 nova_compute[185650]: 2026-01-27 22:46:51.219 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Updating instance_info_cache with network_info: [{"id": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "address": "fa:16:3e:27:72:fe", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.119", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap389fa2e1-24", "ovs_interfaceid": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 22:46:51 compute-0 nova_compute[185650]: 2026-01-27 22:46:51.236 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Releasing lock "refresh_cache-344c74c3-95d6-4f19-993f-b4a89c9d074b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 22:46:51 compute-0 nova_compute[185650]: 2026-01-27 22:46:51.237 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 22:46:51 compute-0 nova_compute[185650]: 2026-01-27 22:46:51.237 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:46:51 compute-0 nova_compute[185650]: 2026-01-27 22:46:51.238 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:46:51 compute-0 nova_compute[185650]: 2026-01-27 22:46:51.238 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:46:51 compute-0 nova_compute[185650]: 2026-01-27 22:46:51.238 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 22:46:52 compute-0 podman[240086]: 2026-01-27 22:46:52.427242398 +0000 UTC m=+0.119707835 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4)
Jan 27 22:46:52 compute-0 nova_compute[185650]: 2026-01-27 22:46:52.571 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:46:53 compute-0 nova_compute[185650]: 2026-01-27 22:46:53.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:46:53 compute-0 nova_compute[185650]: 2026-01-27 22:46:53.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:46:54 compute-0 nova_compute[185650]: 2026-01-27 22:46:54.155 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:46:54 compute-0 nova_compute[185650]: 2026-01-27 22:46:54.155 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:46:54 compute-0 nova_compute[185650]: 2026-01-27 22:46:54.155 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:46:54 compute-0 podman[240106]: 2026-01-27 22:46:54.363561331 +0000 UTC m=+0.068940153 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 22:46:54 compute-0 nova_compute[185650]: 2026-01-27 22:46:54.971 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:46:57 compute-0 podman[240125]: 2026-01-27 22:46:57.433457058 +0000 UTC m=+0.118866327 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 22:46:57 compute-0 nova_compute[185650]: 2026-01-27 22:46:57.573 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:46:59 compute-0 podman[240149]: 2026-01-27 22:46:59.421644137 +0000 UTC m=+0.107737335 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Jan 27 22:46:59 compute-0 podman[201529]: time="2026-01-27T22:46:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:46:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:46:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 22:46:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4350 "" "Go-http-client/1.1"
Jan 27 22:46:59 compute-0 nova_compute[185650]: 2026-01-27 22:46:59.975 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:47:01 compute-0 anacron[31078]: Job `cron.monthly' started
Jan 27 22:47:01 compute-0 anacron[31078]: Job `cron.monthly' terminated
Jan 27 22:47:01 compute-0 anacron[31078]: Normal exit (3 jobs run)
Jan 27 22:47:01 compute-0 openstack_network_exporter[204648]: ERROR   22:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:47:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:47:01 compute-0 openstack_network_exporter[204648]: ERROR   22:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:47:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:47:02 compute-0 podman[240172]: 2026-01-27 22:47:02.372347903 +0000 UTC m=+0.077134042 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, config_id=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, container_name=kepler, name=ubi9, release=1214.1726694543, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, architecture=x86_64, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 22:47:02 compute-0 podman[240173]: 2026-01-27 22:47:02.411467694 +0000 UTC m=+0.111017974 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 27 22:47:02 compute-0 nova_compute[185650]: 2026-01-27 22:47:02.576 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:47:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:47:04.134 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:47:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:47:04.135 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:47:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:47:04.135 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:47:04 compute-0 nova_compute[185650]: 2026-01-27 22:47:04.975 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:47:05 compute-0 sshd-session[240216]: Received disconnect from 45.227.254.170 port 11974:11:  [preauth]
Jan 27 22:47:05 compute-0 sshd-session[240216]: Disconnected from authenticating user root 45.227.254.170 port 11974 [preauth]
Jan 27 22:47:07 compute-0 nova_compute[185650]: 2026-01-27 22:47:07.578 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:47:09 compute-0 nova_compute[185650]: 2026-01-27 22:47:09.977 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:47:11 compute-0 podman[240218]: 2026-01-27 22:47:11.401169821 +0000 UTC m=+0.098042145 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 22:47:12 compute-0 nova_compute[185650]: 2026-01-27 22:47:12.581 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:47:14 compute-0 podman[240242]: 2026-01-27 22:47:14.406122758 +0000 UTC m=+0.101317957 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, vcs-type=git, config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter)
Jan 27 22:47:14 compute-0 nova_compute[185650]: 2026-01-27 22:47:14.979 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:47:17 compute-0 nova_compute[185650]: 2026-01-27 22:47:17.584 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:47:19 compute-0 nova_compute[185650]: 2026-01-27 22:47:19.983 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:47:22 compute-0 nova_compute[185650]: 2026-01-27 22:47:22.587 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:47:23 compute-0 podman[240262]: 2026-01-27 22:47:23.371312131 +0000 UTC m=+0.071291066 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260126, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 27 22:47:24 compute-0 nova_compute[185650]: 2026-01-27 22:47:24.987 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:47:25 compute-0 podman[240281]: 2026-01-27 22:47:25.07199998 +0000 UTC m=+0.061815559 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 27 22:47:27 compute-0 nova_compute[185650]: 2026-01-27 22:47:27.590 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:47:28 compute-0 podman[240299]: 2026-01-27 22:47:28.377879614 +0000 UTC m=+0.079203450 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 22:47:29 compute-0 podman[201529]: time="2026-01-27T22:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:47:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 22:47:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4348 "" "Go-http-client/1.1"
Jan 27 22:47:29 compute-0 nova_compute[185650]: 2026-01-27 22:47:29.991 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:47:30 compute-0 podman[240323]: 2026-01-27 22:47:30.413325119 +0000 UTC m=+0.102648864 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 27 22:47:31 compute-0 openstack_network_exporter[204648]: ERROR   22:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:47:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:47:31 compute-0 openstack_network_exporter[204648]: ERROR   22:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:47:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:47:32 compute-0 nova_compute[185650]: 2026-01-27 22:47:32.593 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:47:33 compute-0 podman[240343]: 2026-01-27 22:47:33.401966992 +0000 UTC m=+0.102643254 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, release-0.7.12=, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 27 22:47:33 compute-0 podman[240344]: 2026-01-27 22:47:33.438846944 +0000 UTC m=+0.135196674 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 22:47:34 compute-0 nova_compute[185650]: 2026-01-27 22:47:34.994 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:47:37 compute-0 nova_compute[185650]: 2026-01-27 22:47:37.597 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.103 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.104 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c646060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19aa20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f826c6475f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19aa20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19aa20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19aa20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19aa20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19aa20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19aa20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19aa20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19aa20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19aa20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19aa20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19aa20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19aa20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645490>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19aa20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19aa20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19aa20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19aa20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19aa20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19aa20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19aa20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19aa20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19aa20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19aa20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.108 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19aa20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.108 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19aa20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.108 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6477a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b19aa20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.113 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd2c3fc6f-7629-469b-be68-8fe07acabe0f', 'name': 'vn-bxiivp3-qxfwvjemo3rq-sawqp3hw5btx-vnf-e5pqbtf6sduj', 'flavor': {'id': 'c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7e803ca7-2382-4e5a-95f7-55acaa154415'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8318d5a200d74e4386cf4972db015b75', 'user_id': '7387204f74504e288ed7a5dee73f5083', 'hostId': '6b704d868c202dfce1245c3ae64d5f83176b88963479398e3b586eea', 'status': 'active', 'metadata': {'metering.server_group': '3b67098f-eb50-41e2-8c8a-348367561673'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.117 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '344c74c3-95d6-4f19-993f-b4a89c9d074b', 'name': 'test_0', 'flavor': {'id': 'c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7e803ca7-2382-4e5a-95f7-55acaa154415'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8318d5a200d74e4386cf4972db015b75', 'user_id': '7387204f74504e288ed7a5dee73f5083', 'hostId': '6b704d868c202dfce1245c3ae64d5f83176b88963479398e3b586eea', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.118 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.118 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c646060>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.118 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c646060>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.119 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.119 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T22:47:38.119024) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.125 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.130 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.131 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.131 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f826c645dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.132 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.132 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f826c647800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.132 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.132 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.132 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.133 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.133 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T22:47:38.133030) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.133 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.134 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.134 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.135 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f826c647650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.135 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.136 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.136 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.136 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.137 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T22:47:38.136609) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.137 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.137 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.138 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.138 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f826c645640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.139 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.139 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.139 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.140 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T22:47:38.139852) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.139 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.249 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.latency volume: 1578835711 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.250 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.latency volume: 10486324 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.251 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.340 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.latency volume: 1982773015 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.340 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.latency volume: 11972381 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.341 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.342 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f826c8ae7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.342 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.342 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.343 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.343 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.343 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.343 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T22:47:38.343158) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.344 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.bytes volume: 1878 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.345 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f826c645a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.345 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.345 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.345 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.346 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.346 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.346 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T22:47:38.346067) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.347 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.347 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.348 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.348 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.349 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.350 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.350 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f826c6462a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.350 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.350 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.351 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.351 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.351 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.351 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T22:47:38.351126) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.352 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.353 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f826c647f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.353 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.353 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.354 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.354 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T22:47:38.354157) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.354 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.390 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/cpu volume: 128890000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.421 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/cpu volume: 35480000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.422 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.422 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f826c645af0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.422 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.423 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.423 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.423 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.423 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T22:47:38.423480) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.424 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.425 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f826c645d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.425 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.425 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.425 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.425 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.426 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T22:47:38.425761) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.426 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/memory.usage volume: 49.17578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.426 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/memory.usage volume: 48.91796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.427 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.427 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f826c645b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.428 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.428 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.428 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.428 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T22:47:38.428532) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.428 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.429 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.430 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f826c644a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.430 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.430 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.430 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.430 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.431 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T22:47:38.430781) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.466 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.467 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.468 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.502 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.503 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.503 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.504 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.504 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f826c6453a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.505 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.505 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645490>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.505 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645490>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.505 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.506 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T22:47:38.505513) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.506 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.506 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.507 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.507 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.508 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.508 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.509 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f826c6454c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.510 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.510 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.510 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.510 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.511 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T22:47:38.510567) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.511 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.latency volume: 560972745 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.511 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.latency volume: 98708783 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.512 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.latency volume: 82244967 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.512 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.latency volume: 603707572 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.513 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.latency volume: 113814738 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.513 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.latency volume: 101138361 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.514 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.514 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f826c645520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.514 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.515 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645550>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.515 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645550>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.515 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.515 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.516 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.517 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.517 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.518 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.518 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.520 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.520 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T22:47:38.515381) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.520 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f826c645d90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.521 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.521 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.521 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.521 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.522 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.incoming.bytes.delta volume: 3363 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.522 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T22:47:38.521874) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.523 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.523 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f826c646570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.524 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.524 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.524 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.525 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.525 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T22:47:38.524952) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.525 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.525 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.526 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f826c645580>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.527 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.527 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.527 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.527 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.528 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.528 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.529 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.530 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.530 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.531 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T22:47:38.527616) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.531 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.532 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f826c6455e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.532 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.533 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645610>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.533 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645610>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.533 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.534 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.bytes volume: 41893888 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.534 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T22:47:38.533557) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.534 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.535 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.535 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.536 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.536 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.537 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.537 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f826c644050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.537 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.537 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.537 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.537 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.538 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.538 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.538 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T22:47:38.537909) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.538 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.539 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f826c647860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.539 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.539 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.539 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.539 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.539 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.outgoing.bytes volume: 4892 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.540 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T22:47:38.539636) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.540 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.bytes volume: 2202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.541 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.541 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f826c6476e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.541 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.541 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.541 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.542 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.542 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.outgoing.bytes.delta volume: 2986 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.542 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.543 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.543 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f826c6456a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.543 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.543 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645730>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.544 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645730>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.544 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.544 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.outgoing.packets volume: 43 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.544 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T22:47:38.541992) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.544 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T22:47:38.544172) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.544 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.545 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.545 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f826f277b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.545 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.545 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.546 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.546 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.546 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.546 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T22:47:38.546126) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.546 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.546 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.547 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.547 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.547 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.548 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f826c647770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.548 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.552 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.552 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.552 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.552 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.552 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.553 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.553 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.553 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.553 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.553 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.553 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.554 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:47:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:47:38.554 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:47:39 compute-0 nova_compute[185650]: 2026-01-27 22:47:39.997 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:47:42 compute-0 podman[240387]: 2026-01-27 22:47:42.344175269 +0000 UTC m=+0.051157928 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 22:47:42 compute-0 nova_compute[185650]: 2026-01-27 22:47:42.600 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:47:44 compute-0 podman[240411]: 2026-01-27 22:47:44.740243363 +0000 UTC m=+0.072708847 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vcs-type=git, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, version=9.6, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_id=openstack_network_exporter, maintainer=Red Hat, Inc.)
Jan 27 22:47:44 compute-0 nova_compute[185650]: 2026-01-27 22:47:44.999 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:47:46 compute-0 nova_compute[185650]: 2026-01-27 22:47:46.994 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:47:46 compute-0 nova_compute[185650]: 2026-01-27 22:47:46.996 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 22:47:47 compute-0 nova_compute[185650]: 2026-01-27 22:47:47.603 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:47:48 compute-0 nova_compute[185650]: 2026-01-27 22:47:48.148 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "refresh_cache-d2c3fc6f-7629-469b-be68-8fe07acabe0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 22:47:48 compute-0 nova_compute[185650]: 2026-01-27 22:47:48.149 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquired lock "refresh_cache-d2c3fc6f-7629-469b-be68-8fe07acabe0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 22:47:48 compute-0 nova_compute[185650]: 2026-01-27 22:47:48.150 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 22:47:50 compute-0 nova_compute[185650]: 2026-01-27 22:47:50.002 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:47:50 compute-0 nova_compute[185650]: 2026-01-27 22:47:50.920 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Updating instance_info_cache with network_info: [{"id": "2083900f-b759-4c97-8c34-5ad3832f0446", "address": "fa:16:3e:27:7c:56", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2083900f-b7", "ovs_interfaceid": "2083900f-b759-4c97-8c34-5ad3832f0446", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 22:47:50 compute-0 nova_compute[185650]: 2026-01-27 22:47:50.935 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Releasing lock "refresh_cache-d2c3fc6f-7629-469b-be68-8fe07acabe0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 22:47:50 compute-0 nova_compute[185650]: 2026-01-27 22:47:50.936 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 22:47:50 compute-0 nova_compute[185650]: 2026-01-27 22:47:50.937 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:47:50 compute-0 nova_compute[185650]: 2026-01-27 22:47:50.937 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:47:50 compute-0 nova_compute[185650]: 2026-01-27 22:47:50.938 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 22:47:50 compute-0 nova_compute[185650]: 2026-01-27 22:47:50.938 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:47:50 compute-0 nova_compute[185650]: 2026-01-27 22:47:50.958 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:47:50 compute-0 nova_compute[185650]: 2026-01-27 22:47:50.959 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:47:50 compute-0 nova_compute[185650]: 2026-01-27 22:47:50.959 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:47:50 compute-0 nova_compute[185650]: 2026-01-27 22:47:50.960 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 22:47:51 compute-0 nova_compute[185650]: 2026-01-27 22:47:51.034 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:47:51 compute-0 nova_compute[185650]: 2026-01-27 22:47:51.108 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:47:51 compute-0 nova_compute[185650]: 2026-01-27 22:47:51.109 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:47:51 compute-0 nova_compute[185650]: 2026-01-27 22:47:51.171 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:47:51 compute-0 nova_compute[185650]: 2026-01-27 22:47:51.172 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:47:51 compute-0 nova_compute[185650]: 2026-01-27 22:47:51.229 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:47:51 compute-0 nova_compute[185650]: 2026-01-27 22:47:51.230 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:47:51 compute-0 nova_compute[185650]: 2026-01-27 22:47:51.294 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:47:51 compute-0 nova_compute[185650]: 2026-01-27 22:47:51.301 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:47:51 compute-0 nova_compute[185650]: 2026-01-27 22:47:51.365 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:47:51 compute-0 nova_compute[185650]: 2026-01-27 22:47:51.366 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:47:51 compute-0 nova_compute[185650]: 2026-01-27 22:47:51.429 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:47:51 compute-0 nova_compute[185650]: 2026-01-27 22:47:51.432 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:47:51 compute-0 nova_compute[185650]: 2026-01-27 22:47:51.492 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:47:51 compute-0 nova_compute[185650]: 2026-01-27 22:47:51.494 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:47:51 compute-0 nova_compute[185650]: 2026-01-27 22:47:51.553 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:47:51 compute-0 nova_compute[185650]: 2026-01-27 22:47:51.914 185654 WARNING nova.virt.libvirt.driver [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 22:47:51 compute-0 nova_compute[185650]: 2026-01-27 22:47:51.915 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5105MB free_disk=72.40045547485352GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 22:47:51 compute-0 nova_compute[185650]: 2026-01-27 22:47:51.916 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:47:51 compute-0 nova_compute[185650]: 2026-01-27 22:47:51.916 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:47:51 compute-0 nova_compute[185650]: 2026-01-27 22:47:51.995 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 344c74c3-95d6-4f19-993f-b4a89c9d074b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:47:51 compute-0 nova_compute[185650]: 2026-01-27 22:47:51.996 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance d2c3fc6f-7629-469b-be68-8fe07acabe0f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:47:51 compute-0 nova_compute[185650]: 2026-01-27 22:47:51.996 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 22:47:51 compute-0 nova_compute[185650]: 2026-01-27 22:47:51.997 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 22:47:52 compute-0 nova_compute[185650]: 2026-01-27 22:47:52.050 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 22:47:52 compute-0 nova_compute[185650]: 2026-01-27 22:47:52.066 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 22:47:52 compute-0 nova_compute[185650]: 2026-01-27 22:47:52.068 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 22:47:52 compute-0 nova_compute[185650]: 2026-01-27 22:47:52.068 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:47:52 compute-0 nova_compute[185650]: 2026-01-27 22:47:52.606 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:47:54 compute-0 nova_compute[185650]: 2026-01-27 22:47:54.125 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:47:54 compute-0 nova_compute[185650]: 2026-01-27 22:47:54.126 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:47:54 compute-0 nova_compute[185650]: 2026-01-27 22:47:54.127 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:47:54 compute-0 podman[240455]: 2026-01-27 22:47:54.397559544 +0000 UTC m=+0.104365882 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, config_id=ceilometer_agent_compute)
Jan 27 22:47:54 compute-0 nova_compute[185650]: 2026-01-27 22:47:54.994 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:47:54 compute-0 nova_compute[185650]: 2026-01-27 22:47:54.996 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:47:55 compute-0 nova_compute[185650]: 2026-01-27 22:47:55.006 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:47:55 compute-0 podman[240474]: 2026-01-27 22:47:55.417272023 +0000 UTC m=+0.109972781 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 22:47:57 compute-0 nova_compute[185650]: 2026-01-27 22:47:57.610 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:47:59 compute-0 podman[240493]: 2026-01-27 22:47:59.374308623 +0000 UTC m=+0.073732966 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 22:47:59 compute-0 podman[201529]: time="2026-01-27T22:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:47:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 22:47:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4348 "" "Go-http-client/1.1"
Jan 27 22:48:00 compute-0 nova_compute[185650]: 2026-01-27 22:48:00.007 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:48:01 compute-0 podman[240517]: 2026-01-27 22:48:01.400673422 +0000 UTC m=+0.093993950 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 22:48:01 compute-0 openstack_network_exporter[204648]: ERROR   22:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:48:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:48:01 compute-0 openstack_network_exporter[204648]: ERROR   22:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:48:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:48:02 compute-0 nova_compute[185650]: 2026-01-27 22:48:02.613 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:48:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:48:04.135 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:48:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:48:04.137 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:48:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:48:04.138 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:48:04 compute-0 podman[240538]: 2026-01-27 22:48:04.428014758 +0000 UTC m=+0.107101190 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, vcs-type=git, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release-0.7.12=, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, build-date=2024-09-18T21:23:30, container_name=kepler, distribution-scope=public)
Jan 27 22:48:04 compute-0 podman[240539]: 2026-01-27 22:48:04.48003401 +0000 UTC m=+0.161633413 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 22:48:05 compute-0 nova_compute[185650]: 2026-01-27 22:48:05.011 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:48:07 compute-0 nova_compute[185650]: 2026-01-27 22:48:07.617 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:48:10 compute-0 nova_compute[185650]: 2026-01-27 22:48:10.013 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:48:12 compute-0 nova_compute[185650]: 2026-01-27 22:48:12.621 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:48:13 compute-0 podman[240581]: 2026-01-27 22:48:13.446204975 +0000 UTC m=+0.129665369 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 22:48:14 compute-0 podman[240603]: 2026-01-27 22:48:14.917802303 +0000 UTC m=+0.129983137 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, managed_by=edpm_ansible, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.buildah.version=1.33.7, config_id=openstack_network_exporter, container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9)
Jan 27 22:48:15 compute-0 nova_compute[185650]: 2026-01-27 22:48:15.015 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:48:17 compute-0 nova_compute[185650]: 2026-01-27 22:48:17.624 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:48:20 compute-0 nova_compute[185650]: 2026-01-27 22:48:20.018 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:48:22 compute-0 nova_compute[185650]: 2026-01-27 22:48:22.628 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:48:25 compute-0 nova_compute[185650]: 2026-01-27 22:48:25.022 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:48:25 compute-0 podman[240624]: 2026-01-27 22:48:25.383851807 +0000 UTC m=+0.084635235 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260126)
Jan 27 22:48:26 compute-0 podman[240643]: 2026-01-27 22:48:26.390148826 +0000 UTC m=+0.091345744 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 22:48:27 compute-0 nova_compute[185650]: 2026-01-27 22:48:27.630 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:48:29 compute-0 podman[201529]: time="2026-01-27T22:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:48:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 22:48:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4345 "" "Go-http-client/1.1"
Jan 27 22:48:30 compute-0 nova_compute[185650]: 2026-01-27 22:48:30.025 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:48:30 compute-0 podman[240661]: 2026-01-27 22:48:30.389153168 +0000 UTC m=+0.084866872 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 22:48:30 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:48:30.515 107302 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:41:f8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '26:ae:8e:b8:80:28'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 22:48:30 compute-0 nova_compute[185650]: 2026-01-27 22:48:30.517 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:48:30 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:48:30.518 107302 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 22:48:31 compute-0 openstack_network_exporter[204648]: ERROR   22:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:48:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:48:31 compute-0 openstack_network_exporter[204648]: ERROR   22:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:48:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:48:32 compute-0 podman[240684]: 2026-01-27 22:48:32.386243902 +0000 UTC m=+0.071896185 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 22:48:32 compute-0 nova_compute[185650]: 2026-01-27 22:48:32.632 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:48:34 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:48:34.521 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e88f80e1-ee63-4bdc-95c3-ad473efb7428, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:48:35 compute-0 nova_compute[185650]: 2026-01-27 22:48:35.027 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:48:35 compute-0 podman[240704]: 2026-01-27 22:48:35.404884382 +0000 UTC m=+0.097256087 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, architecture=x86_64, vcs-type=git, container_name=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release=1214.1726694543, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, com.redhat.component=ubi9-container)
Jan 27 22:48:35 compute-0 podman[240705]: 2026-01-27 22:48:35.455672687 +0000 UTC m=+0.131789790 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 27 22:48:35 compute-0 nova_compute[185650]: 2026-01-27 22:48:35.964 185654 DEBUG oslo_concurrency.lockutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "dd624b81-38f5-46aa-881b-ca66ace64fd3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:48:35 compute-0 nova_compute[185650]: 2026-01-27 22:48:35.965 185654 DEBUG oslo_concurrency.lockutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "dd624b81-38f5-46aa-881b-ca66ace64fd3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:48:35 compute-0 nova_compute[185650]: 2026-01-27 22:48:35.988 185654 DEBUG nova.compute.manager [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.071 185654 DEBUG oslo_concurrency.lockutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.072 185654 DEBUG oslo_concurrency.lockutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.084 185654 DEBUG nova.virt.hardware [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.084 185654 INFO nova.compute.claims [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Claim successful on node compute-0.ctlplane.example.com
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.233 185654 DEBUG nova.compute.provider_tree [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.249 185654 DEBUG nova.scheduler.client.report [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.266 185654 DEBUG oslo_concurrency.lockutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.267 185654 DEBUG nova.compute.manager [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.307 185654 DEBUG nova.compute.manager [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.308 185654 DEBUG nova.network.neutron [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.329 185654 INFO nova.virt.libvirt.driver [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.363 185654 DEBUG nova.compute.manager [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.475 185654 DEBUG nova.compute.manager [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.477 185654 DEBUG nova.virt.libvirt.driver [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.478 185654 INFO nova.virt.libvirt.driver [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Creating image(s)
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.479 185654 DEBUG oslo_concurrency.lockutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "/var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.479 185654 DEBUG oslo_concurrency.lockutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "/var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.481 185654 DEBUG oslo_concurrency.lockutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "/var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.500 185654 DEBUG oslo_concurrency.processutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.566 185654 DEBUG oslo_concurrency.processutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.567 185654 DEBUG oslo_concurrency.lockutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "5c90c71330689347f3144a95195c41f3e929b39e" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.568 185654 DEBUG oslo_concurrency.lockutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "5c90c71330689347f3144a95195c41f3e929b39e" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.579 185654 DEBUG oslo_concurrency.processutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.643 185654 DEBUG oslo_concurrency.processutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.644 185654 DEBUG oslo_concurrency.processutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e,backing_fmt=raw /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.691 185654 DEBUG oslo_concurrency.processutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e,backing_fmt=raw /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk 1073741824" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.692 185654 DEBUG oslo_concurrency.lockutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "5c90c71330689347f3144a95195c41f3e929b39e" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.692 185654 DEBUG oslo_concurrency.processutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.750 185654 DEBUG oslo_concurrency.processutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.752 185654 DEBUG nova.virt.disk.api [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Checking if we can resize image /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.752 185654 DEBUG oslo_concurrency.processutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.808 185654 DEBUG oslo_concurrency.processutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.810 185654 DEBUG nova.virt.disk.api [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Cannot resize image /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.811 185654 DEBUG nova.objects.instance [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lazy-loading 'migration_context' on Instance uuid dd624b81-38f5-46aa-881b-ca66ace64fd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.828 185654 DEBUG oslo_concurrency.lockutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "/var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.829 185654 DEBUG oslo_concurrency.lockutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "/var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.832 185654 DEBUG oslo_concurrency.lockutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "/var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.859 185654 DEBUG oslo_concurrency.processutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.919 185654 DEBUG oslo_concurrency.processutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.921 185654 DEBUG oslo_concurrency.lockutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.922 185654 DEBUG oslo_concurrency.lockutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.935 185654 DEBUG oslo_concurrency.processutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.995 185654 DEBUG oslo_concurrency.processutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:48:36 compute-0 nova_compute[185650]: 2026-01-27 22:48:36.997 185654 DEBUG oslo_concurrency.processutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:48:37 compute-0 nova_compute[185650]: 2026-01-27 22:48:37.041 185654 DEBUG oslo_concurrency.processutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 1073741824" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:48:37 compute-0 nova_compute[185650]: 2026-01-27 22:48:37.042 185654 DEBUG oslo_concurrency.lockutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:48:37 compute-0 nova_compute[185650]: 2026-01-27 22:48:37.043 185654 DEBUG oslo_concurrency.processutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:48:37 compute-0 nova_compute[185650]: 2026-01-27 22:48:37.112 185654 DEBUG oslo_concurrency.processutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:48:37 compute-0 nova_compute[185650]: 2026-01-27 22:48:37.113 185654 DEBUG nova.virt.libvirt.driver [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 22:48:37 compute-0 nova_compute[185650]: 2026-01-27 22:48:37.113 185654 DEBUG nova.virt.libvirt.driver [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Ensure instance console log exists: /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 22:48:37 compute-0 nova_compute[185650]: 2026-01-27 22:48:37.114 185654 DEBUG oslo_concurrency.lockutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:48:37 compute-0 nova_compute[185650]: 2026-01-27 22:48:37.114 185654 DEBUG oslo_concurrency.lockutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:48:37 compute-0 nova_compute[185650]: 2026-01-27 22:48:37.115 185654 DEBUG oslo_concurrency.lockutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:48:37 compute-0 nova_compute[185650]: 2026-01-27 22:48:37.634 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:48:40 compute-0 nova_compute[185650]: 2026-01-27 22:48:40.030 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:48:40 compute-0 nova_compute[185650]: 2026-01-27 22:48:40.540 185654 DEBUG nova.network.neutron [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Successfully updated port: ba4dd39b-aafe-4664-a6e5-0f4eed30dc40 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 22:48:40 compute-0 nova_compute[185650]: 2026-01-27 22:48:40.555 185654 DEBUG oslo_concurrency.lockutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "refresh_cache-dd624b81-38f5-46aa-881b-ca66ace64fd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 22:48:40 compute-0 nova_compute[185650]: 2026-01-27 22:48:40.556 185654 DEBUG oslo_concurrency.lockutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquired lock "refresh_cache-dd624b81-38f5-46aa-881b-ca66ace64fd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 22:48:40 compute-0 nova_compute[185650]: 2026-01-27 22:48:40.557 185654 DEBUG nova.network.neutron [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 22:48:40 compute-0 nova_compute[185650]: 2026-01-27 22:48:40.640 185654 DEBUG nova.compute.manager [req-8a3e38ca-507e-45c7-b741-a34b08710a53 req-7befb168-5215-4a73-b794-15cf879bcd3c b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Received event network-changed-ba4dd39b-aafe-4664-a6e5-0f4eed30dc40 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 22:48:40 compute-0 nova_compute[185650]: 2026-01-27 22:48:40.641 185654 DEBUG nova.compute.manager [req-8a3e38ca-507e-45c7-b741-a34b08710a53 req-7befb168-5215-4a73-b794-15cf879bcd3c b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Refreshing instance network info cache due to event network-changed-ba4dd39b-aafe-4664-a6e5-0f4eed30dc40. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 22:48:40 compute-0 nova_compute[185650]: 2026-01-27 22:48:40.642 185654 DEBUG oslo_concurrency.lockutils [req-8a3e38ca-507e-45c7-b741-a34b08710a53 req-7befb168-5215-4a73-b794-15cf879bcd3c b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "refresh_cache-dd624b81-38f5-46aa-881b-ca66ace64fd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 22:48:40 compute-0 nova_compute[185650]: 2026-01-27 22:48:40.710 185654 DEBUG nova.network.neutron [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.435 185654 DEBUG nova.network.neutron [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Updating instance_info_cache with network_info: [{"id": "ba4dd39b-aafe-4664-a6e5-0f4eed30dc40", "address": "fa:16:3e:54:77:d7", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.223", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba4dd39b-aa", "ovs_interfaceid": "ba4dd39b-aafe-4664-a6e5-0f4eed30dc40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.455 185654 DEBUG oslo_concurrency.lockutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Releasing lock "refresh_cache-dd624b81-38f5-46aa-881b-ca66ace64fd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.455 185654 DEBUG nova.compute.manager [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Instance network_info: |[{"id": "ba4dd39b-aafe-4664-a6e5-0f4eed30dc40", "address": "fa:16:3e:54:77:d7", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.223", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba4dd39b-aa", "ovs_interfaceid": "ba4dd39b-aafe-4664-a6e5-0f4eed30dc40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.455 185654 DEBUG oslo_concurrency.lockutils [req-8a3e38ca-507e-45c7-b741-a34b08710a53 req-7befb168-5215-4a73-b794-15cf879bcd3c b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquired lock "refresh_cache-dd624b81-38f5-46aa-881b-ca66ace64fd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.456 185654 DEBUG nova.network.neutron [req-8a3e38ca-507e-45c7-b741-a34b08710a53 req-7befb168-5215-4a73-b794-15cf879bcd3c b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Refreshing network info cache for port ba4dd39b-aafe-4664-a6e5-0f4eed30dc40 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.459 185654 DEBUG nova.virt.libvirt.driver [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Start _get_guest_xml network_info=[{"id": "ba4dd39b-aafe-4664-a6e5-0f4eed30dc40", "address": "fa:16:3e:54:77:d7", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.223", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba4dd39b-aa", "ovs_interfaceid": "ba4dd39b-aafe-4664-a6e5-0f4eed30dc40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-27T22:42:20Z,direct_url=<?>,disk_format='qcow2',id=7e803ca7-2382-4e5a-95f7-55acaa154415,min_disk=0,min_ram=0,name='cirros',owner='8318d5a200d74e4386cf4972db015b75',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-27T22:42:22Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_format': None, 'boot_index': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7e803ca7-2382-4e5a-95f7-55acaa154415'}], 'ephemerals': [{'size': 1, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'encrypted': False}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.467 185654 WARNING nova.virt.libvirt.driver [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.473 185654 DEBUG nova.virt.libvirt.host [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.474 185654 DEBUG nova.virt.libvirt.host [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.478 185654 DEBUG nova.virt.libvirt.host [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.479 185654 DEBUG nova.virt.libvirt.host [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.479 185654 DEBUG nova.virt.libvirt.driver [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.479 185654 DEBUG nova.virt.hardware [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T22:42:25Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-27T22:42:20Z,direct_url=<?>,disk_format='qcow2',id=7e803ca7-2382-4e5a-95f7-55acaa154415,min_disk=0,min_ram=0,name='cirros',owner='8318d5a200d74e4386cf4972db015b75',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-27T22:42:22Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.480 185654 DEBUG nova.virt.hardware [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.480 185654 DEBUG nova.virt.hardware [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.481 185654 DEBUG nova.virt.hardware [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.481 185654 DEBUG nova.virt.hardware [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.481 185654 DEBUG nova.virt.hardware [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.482 185654 DEBUG nova.virt.hardware [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.482 185654 DEBUG nova.virt.hardware [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.482 185654 DEBUG nova.virt.hardware [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.483 185654 DEBUG nova.virt.hardware [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.483 185654 DEBUG nova.virt.hardware [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.486 185654 DEBUG nova.virt.libvirt.vif [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T22:48:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-bxiivp3-2npykxfceygn-qfpmbakkd4ep-vnf-ztsky6llf24g',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-bxiivp3-2npykxfceygn-qfpmbakkd4ep-vnf-ztsky6llf24g',id=3,image_ref='7e803ca7-2382-4e5a-95f7-55acaa154415',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='3b67098f-eb50-41e2-8c8a-348367561673'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8318d5a200d74e4386cf4972db015b75',ramdisk_id='',reservation_id='r-x9j1qa3e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='7e803ca7-2382-4e5a-95f7-55acaa154415',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T22:48:36Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0yMDMxMjEzMjUzNzY0NzM4MzQ0PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTIwMzEyMTMyNTM3NjQ3MzgzNDQ9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MjAzMTIxMzI1Mzc2NDczODM0ND09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTIwMzEyMTMyNTM3NjQ3MzgzNDQ9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0yMDMxMjEzMjUzNzY0NzM4MzQ0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0yMDMxMjEzMjUzNzY0NzM4MzQ0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Jan 27 22:48:41 compute-0 nova_compute[185650]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MjAzMTIxMzI1Mzc2NDczODM0ND09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTIwMzEyMTMyNTM3NjQ3MzgzNDQ9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0yMDMxMjEzMjUzNzY0NzM4MzQ0PT0tLQo=',user_id='7387204f74504e288ed7a5dee73f5083',uuid=dd624b81-38f5-46aa-881b-ca66ace64fd3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ba4dd39b-aafe-4664-a6e5-0f4eed30dc40", "address": "fa:16:3e:54:77:d7", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.223", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba4dd39b-aa", "ovs_interfaceid": "ba4dd39b-aafe-4664-a6e5-0f4eed30dc40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.486 185654 DEBUG nova.network.os_vif_util [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Converting VIF {"id": "ba4dd39b-aafe-4664-a6e5-0f4eed30dc40", "address": "fa:16:3e:54:77:d7", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.223", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba4dd39b-aa", "ovs_interfaceid": "ba4dd39b-aafe-4664-a6e5-0f4eed30dc40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.487 185654 DEBUG nova.network.os_vif_util [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:77:d7,bridge_name='br-int',has_traffic_filtering=True,id=ba4dd39b-aafe-4664-a6e5-0f4eed30dc40,network=Network(98f694e3-becc-413f-b42b-35a7171f7f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba4dd39b-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.488 185654 DEBUG nova.objects.instance [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lazy-loading 'pci_devices' on Instance uuid dd624b81-38f5-46aa-881b-ca66ace64fd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.501 185654 DEBUG nova.virt.libvirt.driver [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] End _get_guest_xml xml=<domain type="kvm">
Jan 27 22:48:41 compute-0 nova_compute[185650]:   <uuid>dd624b81-38f5-46aa-881b-ca66ace64fd3</uuid>
Jan 27 22:48:41 compute-0 nova_compute[185650]:   <name>instance-00000003</name>
Jan 27 22:48:41 compute-0 nova_compute[185650]:   <memory>524288</memory>
Jan 27 22:48:41 compute-0 nova_compute[185650]:   <vcpu>1</vcpu>
Jan 27 22:48:41 compute-0 nova_compute[185650]:   <metadata>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 22:48:41 compute-0 nova_compute[185650]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:       <nova:name>vn-bxiivp3-2npykxfceygn-qfpmbakkd4ep-vnf-ztsky6llf24g</nova:name>
Jan 27 22:48:41 compute-0 nova_compute[185650]:       <nova:creationTime>2026-01-27 22:48:41</nova:creationTime>
Jan 27 22:48:41 compute-0 nova_compute[185650]:       <nova:flavor name="m1.small">
Jan 27 22:48:41 compute-0 nova_compute[185650]:         <nova:memory>512</nova:memory>
Jan 27 22:48:41 compute-0 nova_compute[185650]:         <nova:disk>1</nova:disk>
Jan 27 22:48:41 compute-0 nova_compute[185650]:         <nova:swap>0</nova:swap>
Jan 27 22:48:41 compute-0 nova_compute[185650]:         <nova:ephemeral>1</nova:ephemeral>
Jan 27 22:48:41 compute-0 nova_compute[185650]:         <nova:vcpus>1</nova:vcpus>
Jan 27 22:48:41 compute-0 nova_compute[185650]:       </nova:flavor>
Jan 27 22:48:41 compute-0 nova_compute[185650]:       <nova:owner>
Jan 27 22:48:41 compute-0 nova_compute[185650]:         <nova:user uuid="7387204f74504e288ed7a5dee73f5083">admin</nova:user>
Jan 27 22:48:41 compute-0 nova_compute[185650]:         <nova:project uuid="8318d5a200d74e4386cf4972db015b75">admin</nova:project>
Jan 27 22:48:41 compute-0 nova_compute[185650]:       </nova:owner>
Jan 27 22:48:41 compute-0 nova_compute[185650]:       <nova:root type="image" uuid="7e803ca7-2382-4e5a-95f7-55acaa154415"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:       <nova:ports>
Jan 27 22:48:41 compute-0 nova_compute[185650]:         <nova:port uuid="ba4dd39b-aafe-4664-a6e5-0f4eed30dc40">
Jan 27 22:48:41 compute-0 nova_compute[185650]:           <nova:ip type="fixed" address="192.168.0.223" ipVersion="4"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:         </nova:port>
Jan 27 22:48:41 compute-0 nova_compute[185650]:       </nova:ports>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     </nova:instance>
Jan 27 22:48:41 compute-0 nova_compute[185650]:   </metadata>
Jan 27 22:48:41 compute-0 nova_compute[185650]:   <sysinfo type="smbios">
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <system>
Jan 27 22:48:41 compute-0 nova_compute[185650]:       <entry name="manufacturer">RDO</entry>
Jan 27 22:48:41 compute-0 nova_compute[185650]:       <entry name="product">OpenStack Compute</entry>
Jan 27 22:48:41 compute-0 nova_compute[185650]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 22:48:41 compute-0 nova_compute[185650]:       <entry name="serial">dd624b81-38f5-46aa-881b-ca66ace64fd3</entry>
Jan 27 22:48:41 compute-0 nova_compute[185650]:       <entry name="uuid">dd624b81-38f5-46aa-881b-ca66ace64fd3</entry>
Jan 27 22:48:41 compute-0 nova_compute[185650]:       <entry name="family">Virtual Machine</entry>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     </system>
Jan 27 22:48:41 compute-0 nova_compute[185650]:   </sysinfo>
Jan 27 22:48:41 compute-0 nova_compute[185650]:   <os>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <boot dev="hd"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <smbios mode="sysinfo"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:   </os>
Jan 27 22:48:41 compute-0 nova_compute[185650]:   <features>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <acpi/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <apic/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <vmcoreinfo/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:   </features>
Jan 27 22:48:41 compute-0 nova_compute[185650]:   <clock offset="utc">
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <timer name="hpet" present="no"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:   </clock>
Jan 27 22:48:41 compute-0 nova_compute[185650]:   <cpu mode="host-model" match="exact">
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:   </cpu>
Jan 27 22:48:41 compute-0 nova_compute[185650]:   <devices>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <disk type="file" device="disk">
Jan 27 22:48:41 compute-0 nova_compute[185650]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:       <source file="/var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:       <target dev="vda" bus="virtio"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     </disk>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <disk type="file" device="disk">
Jan 27 22:48:41 compute-0 nova_compute[185650]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:       <source file="/var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:       <target dev="vdb" bus="virtio"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     </disk>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <disk type="file" device="cdrom">
Jan 27 22:48:41 compute-0 nova_compute[185650]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:       <source file="/var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.config"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:       <target dev="sda" bus="sata"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     </disk>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <interface type="ethernet">
Jan 27 22:48:41 compute-0 nova_compute[185650]:       <mac address="fa:16:3e:54:77:d7"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:       <model type="virtio"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:       <mtu size="1442"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:       <target dev="tapba4dd39b-aa"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     </interface>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <serial type="pty">
Jan 27 22:48:41 compute-0 nova_compute[185650]:       <log file="/var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/console.log" append="off"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     </serial>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <video>
Jan 27 22:48:41 compute-0 nova_compute[185650]:       <model type="virtio"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     </video>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <input type="tablet" bus="usb"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <rng model="virtio">
Jan 27 22:48:41 compute-0 nova_compute[185650]:       <backend model="random">/dev/urandom</backend>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     </rng>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <controller type="usb" index="0"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     <memballoon model="virtio">
Jan 27 22:48:41 compute-0 nova_compute[185650]:       <stats period="10"/>
Jan 27 22:48:41 compute-0 nova_compute[185650]:     </memballoon>
Jan 27 22:48:41 compute-0 nova_compute[185650]:   </devices>
Jan 27 22:48:41 compute-0 nova_compute[185650]: </domain>
Jan 27 22:48:41 compute-0 nova_compute[185650]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.503 185654 DEBUG nova.compute.manager [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Preparing to wait for external event network-vif-plugged-ba4dd39b-aafe-4664-a6e5-0f4eed30dc40 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.503 185654 DEBUG oslo_concurrency.lockutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "dd624b81-38f5-46aa-881b-ca66ace64fd3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.503 185654 DEBUG oslo_concurrency.lockutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "dd624b81-38f5-46aa-881b-ca66ace64fd3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.504 185654 DEBUG oslo_concurrency.lockutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "dd624b81-38f5-46aa-881b-ca66ace64fd3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.505 185654 DEBUG nova.virt.libvirt.vif [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T22:48:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-bxiivp3-2npykxfceygn-qfpmbakkd4ep-vnf-ztsky6llf24g',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-bxiivp3-2npykxfceygn-qfpmbakkd4ep-vnf-ztsky6llf24g',id=3,image_ref='7e803ca7-2382-4e5a-95f7-55acaa154415',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='3b67098f-eb50-41e2-8c8a-348367561673'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8318d5a200d74e4386cf4972db015b75',ramdisk_id='',reservation_id='r-x9j1qa3e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='7e803ca7-2382-4e5a-95f7-55acaa154415',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T22:48:36Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0yMDMxMjEzMjUzNzY0NzM4MzQ0PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTIwMzEyMTMyNTM3NjQ3MzgzNDQ9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MjAzMTIxMzI1Mzc2NDczODM0ND09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTIwMzEyMTMyNTM3NjQ3MzgzNDQ9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0yMDMxMjEzMjUzNzY0NzM4MzQ0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0yMDMxMjEzMjUzNzY0NzM4MzQ0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Jan 27 22:48:41 compute-0 nova_compute[185650]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MjAzMTIxMzI1Mzc2NDczODM0ND09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29
udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTIwMzEyMTMyNTM3NjQ3MzgzNDQ9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0yMDMxMjEzMjUzNzY0NzM4MzQ0PT0tLQo=',user_id='7387204f74504e288ed7a5dee73f5083',uuid=dd624b81-38f5-46aa-881b-ca66ace64fd3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ba4dd39b-aafe-4664-a6e5-0f4eed30dc40", "address": "fa:16:3e:54:77:d7", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.223", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba4dd39b-aa", "ovs_interfaceid": "ba4dd39b-aafe-4664-a6e5-0f4eed30dc40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.505 185654 DEBUG nova.network.os_vif_util [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Converting VIF {"id": "ba4dd39b-aafe-4664-a6e5-0f4eed30dc40", "address": "fa:16:3e:54:77:d7", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.223", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba4dd39b-aa", "ovs_interfaceid": "ba4dd39b-aafe-4664-a6e5-0f4eed30dc40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.506 185654 DEBUG nova.network.os_vif_util [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:77:d7,bridge_name='br-int',has_traffic_filtering=True,id=ba4dd39b-aafe-4664-a6e5-0f4eed30dc40,network=Network(98f694e3-becc-413f-b42b-35a7171f7f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba4dd39b-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.506 185654 DEBUG os_vif [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:77:d7,bridge_name='br-int',has_traffic_filtering=True,id=ba4dd39b-aafe-4664-a6e5-0f4eed30dc40,network=Network(98f694e3-becc-413f-b42b-35a7171f7f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba4dd39b-aa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.507 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.507 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.507 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.511 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.511 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapba4dd39b-aa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.512 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapba4dd39b-aa, col_values=(('external_ids', {'iface-id': 'ba4dd39b-aafe-4664-a6e5-0f4eed30dc40', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:54:77:d7', 'vm-uuid': 'dd624b81-38f5-46aa-881b-ca66ace64fd3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.514 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:48:41 compute-0 NetworkManager[56600]: <info>  [1769554121.5157] manager: (tapba4dd39b-aa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.515 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.526 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.527 185654 INFO os_vif [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:77:d7,bridge_name='br-int',has_traffic_filtering=True,id=ba4dd39b-aafe-4664-a6e5-0f4eed30dc40,network=Network(98f694e3-becc-413f-b42b-35a7171f7f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba4dd39b-aa')
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.581 185654 DEBUG nova.virt.libvirt.driver [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.581 185654 DEBUG nova.virt.libvirt.driver [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.581 185654 DEBUG nova.virt.libvirt.driver [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.582 185654 DEBUG nova.virt.libvirt.driver [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] No VIF found with MAC fa:16:3e:54:77:d7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 22:48:41 compute-0 nova_compute[185650]: 2026-01-27 22:48:41.582 185654 INFO nova.virt.libvirt.driver [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Using config drive
Jan 27 22:48:41 compute-0 rsyslogd[235951]: message too long (8192) with configured size 8096, begin of message is: 2026-01-27 22:48:41.486 185654 DEBUG nova.virt.libvirt.vif [None req-c653e232-ce [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 22:48:41 compute-0 rsyslogd[235951]: message too long (8192) with configured size 8096, begin of message is: 2026-01-27 22:48:41.505 185654 DEBUG nova.virt.libvirt.vif [None req-c653e232-ce [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 22:48:42 compute-0 nova_compute[185650]: 2026-01-27 22:48:42.381 185654 INFO nova.virt.libvirt.driver [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Creating config drive at /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.config
Jan 27 22:48:42 compute-0 nova_compute[185650]: 2026-01-27 22:48:42.389 185654 DEBUG oslo_concurrency.processutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9jhmp5c5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:48:42 compute-0 nova_compute[185650]: 2026-01-27 22:48:42.523 185654 DEBUG oslo_concurrency.processutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9jhmp5c5" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:48:42 compute-0 kernel: tapba4dd39b-aa: entered promiscuous mode
Jan 27 22:48:42 compute-0 NetworkManager[56600]: <info>  [1769554122.6135] manager: (tapba4dd39b-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Jan 27 22:48:42 compute-0 ovn_controller[98048]: 2026-01-27T22:48:42Z|00040|binding|INFO|Claiming lport ba4dd39b-aafe-4664-a6e5-0f4eed30dc40 for this chassis.
Jan 27 22:48:42 compute-0 ovn_controller[98048]: 2026-01-27T22:48:42Z|00041|binding|INFO|ba4dd39b-aafe-4664-a6e5-0f4eed30dc40: Claiming fa:16:3e:54:77:d7 192.168.0.223
Jan 27 22:48:42 compute-0 nova_compute[185650]: 2026-01-27 22:48:42.615 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:48:42 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:48:42.619 107302 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:77:d7 192.168.0.223'], port_security=['fa:16:3e:54:77:d7 192.168.0.223'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-e3ismbxiivp3-2npykxfceygn-qfpmbakkd4ep-port-kyn5svl6qrpu', 'neutron:cidrs': '192.168.0.223/24', 'neutron:device_id': 'dd624b81-38f5-46aa-881b-ca66ace64fd3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-98f694e3-becc-413f-b42b-35a7171f7f96', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-e3ismbxiivp3-2npykxfceygn-qfpmbakkd4ep-port-kyn5svl6qrpu', 'neutron:project_id': '8318d5a200d74e4386cf4972db015b75', 'neutron:revision_number': '2', 'neutron:security_group_ids': '597f1057-390b-408a-b8d0-705fb45de27b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.201'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d21d3e2-2f64-49c8-bca6-9efc66f5bd67, chassis=[<ovs.db.idl.Row object at 0x7f8d908cb640>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8d908cb640>], logical_port=ba4dd39b-aafe-4664-a6e5-0f4eed30dc40) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 22:48:42 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:48:42.620 107302 INFO neutron.agent.ovn.metadata.agent [-] Port ba4dd39b-aafe-4664-a6e5-0f4eed30dc40 in datapath 98f694e3-becc-413f-b42b-35a7171f7f96 bound to our chassis
Jan 27 22:48:42 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:48:42.621 107302 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 98f694e3-becc-413f-b42b-35a7171f7f96
Jan 27 22:48:42 compute-0 ovn_controller[98048]: 2026-01-27T22:48:42Z|00042|binding|INFO|Setting lport ba4dd39b-aafe-4664-a6e5-0f4eed30dc40 ovn-installed in OVS
Jan 27 22:48:42 compute-0 ovn_controller[98048]: 2026-01-27T22:48:42Z|00043|binding|INFO|Setting lport ba4dd39b-aafe-4664-a6e5-0f4eed30dc40 up in Southbound
Jan 27 22:48:42 compute-0 nova_compute[185650]: 2026-01-27 22:48:42.635 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:48:42 compute-0 nova_compute[185650]: 2026-01-27 22:48:42.642 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:48:42 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:48:42.652 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[b839d99b-7140-455b-98de-7880ca862edb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:48:42 compute-0 systemd-machined[157036]: New machine qemu-3-instance-00000003.
Jan 27 22:48:42 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Jan 27 22:48:42 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:48:42.695 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[ed3c7c51-68fe-48e6-aa1d-a924542b57c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:48:42 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:48:42.700 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[df91c726-5b25-48b6-8ec4-e1de7f5eac18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:48:42 compute-0 systemd-udevd[240802]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 22:48:42 compute-0 NetworkManager[56600]: <info>  [1769554122.7239] device (tapba4dd39b-aa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 22:48:42 compute-0 NetworkManager[56600]: <info>  [1769554122.7288] device (tapba4dd39b-aa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 22:48:42 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:48:42.741 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[ea915bac-ae2d-44ad-a9f8-5ee671ca196b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:48:42 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:48:42.757 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[17ba5bb3-c87e-4171-92b1-8b7b3816ae80]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap98f694e3-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:25:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 365000, 'reachable_time': 21737, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240811, 'error': None, 'target': 'ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:48:42 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:48:42.771 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[aa1a2960-9465-4758-bb67-e0a75f093eec]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap98f694e3-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 365013, 'tstamp': 365013}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240813, 'error': None, 'target': 'ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap98f694e3-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 365017, 'tstamp': 365017}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240813, 'error': None, 'target': 'ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:48:42 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:48:42.773 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98f694e3-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:48:42 compute-0 nova_compute[185650]: 2026-01-27 22:48:42.774 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:48:42 compute-0 nova_compute[185650]: 2026-01-27 22:48:42.775 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:48:42 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:48:42.776 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98f694e3-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:48:42 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:48:42.777 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 22:48:42 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:48:42.778 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap98f694e3-b0, col_values=(('external_ids', {'iface-id': 'acacffcb-4de9-40c5-aeef-3e5766b557e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:48:42 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:48:42.779 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 22:48:42 compute-0 nova_compute[185650]: 2026-01-27 22:48:42.949 185654 DEBUG nova.virt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Emitting event <LifecycleEvent: 1769554122.948857, dd624b81-38f5-46aa-881b-ca66ace64fd3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 22:48:42 compute-0 nova_compute[185650]: 2026-01-27 22:48:42.950 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] VM Started (Lifecycle Event)
Jan 27 22:48:42 compute-0 nova_compute[185650]: 2026-01-27 22:48:42.968 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 22:48:42 compute-0 nova_compute[185650]: 2026-01-27 22:48:42.973 185654 DEBUG nova.virt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Emitting event <LifecycleEvent: 1769554122.9498081, dd624b81-38f5-46aa-881b-ca66ace64fd3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 22:48:42 compute-0 nova_compute[185650]: 2026-01-27 22:48:42.973 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] VM Paused (Lifecycle Event)
Jan 27 22:48:42 compute-0 nova_compute[185650]: 2026-01-27 22:48:42.988 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 22:48:42 compute-0 nova_compute[185650]: 2026-01-27 22:48:42.992 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 22:48:43 compute-0 nova_compute[185650]: 2026-01-27 22:48:43.008 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 22:48:43 compute-0 nova_compute[185650]: 2026-01-27 22:48:43.393 185654 DEBUG nova.compute.manager [req-3c6faee4-64fa-46f6-a7a9-ba0868309d08 req-3602dfef-c254-498e-9a07-6c24495c830d b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Received event network-vif-plugged-ba4dd39b-aafe-4664-a6e5-0f4eed30dc40 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 22:48:43 compute-0 nova_compute[185650]: 2026-01-27 22:48:43.393 185654 DEBUG oslo_concurrency.lockutils [req-3c6faee4-64fa-46f6-a7a9-ba0868309d08 req-3602dfef-c254-498e-9a07-6c24495c830d b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "dd624b81-38f5-46aa-881b-ca66ace64fd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:48:43 compute-0 nova_compute[185650]: 2026-01-27 22:48:43.394 185654 DEBUG oslo_concurrency.lockutils [req-3c6faee4-64fa-46f6-a7a9-ba0868309d08 req-3602dfef-c254-498e-9a07-6c24495c830d b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "dd624b81-38f5-46aa-881b-ca66ace64fd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:48:43 compute-0 nova_compute[185650]: 2026-01-27 22:48:43.394 185654 DEBUG oslo_concurrency.lockutils [req-3c6faee4-64fa-46f6-a7a9-ba0868309d08 req-3602dfef-c254-498e-9a07-6c24495c830d b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "dd624b81-38f5-46aa-881b-ca66ace64fd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:48:43 compute-0 nova_compute[185650]: 2026-01-27 22:48:43.394 185654 DEBUG nova.compute.manager [req-3c6faee4-64fa-46f6-a7a9-ba0868309d08 req-3602dfef-c254-498e-9a07-6c24495c830d b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Processing event network-vif-plugged-ba4dd39b-aafe-4664-a6e5-0f4eed30dc40 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 22:48:43 compute-0 nova_compute[185650]: 2026-01-27 22:48:43.395 185654 DEBUG nova.compute.manager [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 22:48:43 compute-0 nova_compute[185650]: 2026-01-27 22:48:43.399 185654 DEBUG nova.virt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Emitting event <LifecycleEvent: 1769554123.3986268, dd624b81-38f5-46aa-881b-ca66ace64fd3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 22:48:43 compute-0 nova_compute[185650]: 2026-01-27 22:48:43.399 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] VM Resumed (Lifecycle Event)
Jan 27 22:48:43 compute-0 nova_compute[185650]: 2026-01-27 22:48:43.401 185654 DEBUG nova.virt.libvirt.driver [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 22:48:43 compute-0 nova_compute[185650]: 2026-01-27 22:48:43.406 185654 INFO nova.virt.libvirt.driver [-] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Instance spawned successfully.
Jan 27 22:48:43 compute-0 nova_compute[185650]: 2026-01-27 22:48:43.407 185654 DEBUG nova.virt.libvirt.driver [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 22:48:43 compute-0 nova_compute[185650]: 2026-01-27 22:48:43.421 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 22:48:43 compute-0 nova_compute[185650]: 2026-01-27 22:48:43.429 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 22:48:43 compute-0 nova_compute[185650]: 2026-01-27 22:48:43.432 185654 DEBUG nova.virt.libvirt.driver [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 22:48:43 compute-0 nova_compute[185650]: 2026-01-27 22:48:43.433 185654 DEBUG nova.virt.libvirt.driver [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 22:48:43 compute-0 nova_compute[185650]: 2026-01-27 22:48:43.433 185654 DEBUG nova.virt.libvirt.driver [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 22:48:43 compute-0 nova_compute[185650]: 2026-01-27 22:48:43.433 185654 DEBUG nova.virt.libvirt.driver [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 22:48:43 compute-0 nova_compute[185650]: 2026-01-27 22:48:43.434 185654 DEBUG nova.virt.libvirt.driver [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 22:48:43 compute-0 nova_compute[185650]: 2026-01-27 22:48:43.434 185654 DEBUG nova.virt.libvirt.driver [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 22:48:43 compute-0 nova_compute[185650]: 2026-01-27 22:48:43.461 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 22:48:43 compute-0 nova_compute[185650]: 2026-01-27 22:48:43.494 185654 INFO nova.compute.manager [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Took 7.02 seconds to spawn the instance on the hypervisor.
Jan 27 22:48:43 compute-0 nova_compute[185650]: 2026-01-27 22:48:43.495 185654 DEBUG nova.compute.manager [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 22:48:43 compute-0 nova_compute[185650]: 2026-01-27 22:48:43.556 185654 INFO nova.compute.manager [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Took 7.52 seconds to build instance.
Jan 27 22:48:43 compute-0 nova_compute[185650]: 2026-01-27 22:48:43.574 185654 DEBUG oslo_concurrency.lockutils [None req-c653e232-ce7a-4270-9644-4626e2fe6054 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "dd624b81-38f5-46aa-881b-ca66ace64fd3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:48:43 compute-0 nova_compute[185650]: 2026-01-27 22:48:43.593 185654 DEBUG nova.network.neutron [req-8a3e38ca-507e-45c7-b741-a34b08710a53 req-7befb168-5215-4a73-b794-15cf879bcd3c b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Updated VIF entry in instance network info cache for port ba4dd39b-aafe-4664-a6e5-0f4eed30dc40. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 22:48:43 compute-0 nova_compute[185650]: 2026-01-27 22:48:43.594 185654 DEBUG nova.network.neutron [req-8a3e38ca-507e-45c7-b741-a34b08710a53 req-7befb168-5215-4a73-b794-15cf879bcd3c b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Updating instance_info_cache with network_info: [{"id": "ba4dd39b-aafe-4664-a6e5-0f4eed30dc40", "address": "fa:16:3e:54:77:d7", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.223", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba4dd39b-aa", "ovs_interfaceid": "ba4dd39b-aafe-4664-a6e5-0f4eed30dc40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 22:48:43 compute-0 nova_compute[185650]: 2026-01-27 22:48:43.607 185654 DEBUG oslo_concurrency.lockutils [req-8a3e38ca-507e-45c7-b741-a34b08710a53 req-7befb168-5215-4a73-b794-15cf879bcd3c b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Releasing lock "refresh_cache-dd624b81-38f5-46aa-881b-ca66ace64fd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 22:48:43 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 27 22:48:44 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 27 22:48:44 compute-0 podman[240821]: 2026-01-27 22:48:44.10148944 +0000 UTC m=+0.091649326 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 22:48:45 compute-0 nova_compute[185650]: 2026-01-27 22:48:45.034 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:48:45 compute-0 podman[240863]: 2026-01-27 22:48:45.373128479 +0000 UTC m=+0.073663050 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, distribution-scope=public, maintainer=Red Hat, Inc., release=1755695350, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.expose-services=, version=9.6, config_id=openstack_network_exporter, name=ubi9-minimal)
Jan 27 22:48:45 compute-0 nova_compute[185650]: 2026-01-27 22:48:45.463 185654 DEBUG nova.compute.manager [req-e70e668f-5d0a-4138-9ca7-2f54cfc5a433 req-a60548fa-8c91-48de-a64d-0e925ab2b581 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Received event network-vif-plugged-ba4dd39b-aafe-4664-a6e5-0f4eed30dc40 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 22:48:45 compute-0 nova_compute[185650]: 2026-01-27 22:48:45.463 185654 DEBUG oslo_concurrency.lockutils [req-e70e668f-5d0a-4138-9ca7-2f54cfc5a433 req-a60548fa-8c91-48de-a64d-0e925ab2b581 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "dd624b81-38f5-46aa-881b-ca66ace64fd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:48:45 compute-0 nova_compute[185650]: 2026-01-27 22:48:45.464 185654 DEBUG oslo_concurrency.lockutils [req-e70e668f-5d0a-4138-9ca7-2f54cfc5a433 req-a60548fa-8c91-48de-a64d-0e925ab2b581 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "dd624b81-38f5-46aa-881b-ca66ace64fd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:48:45 compute-0 nova_compute[185650]: 2026-01-27 22:48:45.464 185654 DEBUG oslo_concurrency.lockutils [req-e70e668f-5d0a-4138-9ca7-2f54cfc5a433 req-a60548fa-8c91-48de-a64d-0e925ab2b581 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "dd624b81-38f5-46aa-881b-ca66ace64fd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:48:45 compute-0 nova_compute[185650]: 2026-01-27 22:48:45.464 185654 DEBUG nova.compute.manager [req-e70e668f-5d0a-4138-9ca7-2f54cfc5a433 req-a60548fa-8c91-48de-a64d-0e925ab2b581 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] No waiting events found dispatching network-vif-plugged-ba4dd39b-aafe-4664-a6e5-0f4eed30dc40 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 22:48:45 compute-0 nova_compute[185650]: 2026-01-27 22:48:45.465 185654 WARNING nova.compute.manager [req-e70e668f-5d0a-4138-9ca7-2f54cfc5a433 req-a60548fa-8c91-48de-a64d-0e925ab2b581 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Received unexpected event network-vif-plugged-ba4dd39b-aafe-4664-a6e5-0f4eed30dc40 for instance with vm_state active and task_state None.
Jan 27 22:48:46 compute-0 nova_compute[185650]: 2026-01-27 22:48:46.515 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:48:48 compute-0 nova_compute[185650]: 2026-01-27 22:48:48.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:48:48 compute-0 nova_compute[185650]: 2026-01-27 22:48:48.993 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 22:48:48 compute-0 nova_compute[185650]: 2026-01-27 22:48:48.994 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 22:48:49 compute-0 nova_compute[185650]: 2026-01-27 22:48:49.209 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "refresh_cache-344c74c3-95d6-4f19-993f-b4a89c9d074b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 22:48:49 compute-0 nova_compute[185650]: 2026-01-27 22:48:49.210 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquired lock "refresh_cache-344c74c3-95d6-4f19-993f-b4a89c9d074b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 22:48:49 compute-0 nova_compute[185650]: 2026-01-27 22:48:49.210 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 22:48:49 compute-0 nova_compute[185650]: 2026-01-27 22:48:49.210 185654 DEBUG nova.objects.instance [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 344c74c3-95d6-4f19-993f-b4a89c9d074b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 22:48:50 compute-0 nova_compute[185650]: 2026-01-27 22:48:50.035 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:48:51 compute-0 nova_compute[185650]: 2026-01-27 22:48:51.410 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Updating instance_info_cache with network_info: [{"id": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "address": "fa:16:3e:27:72:fe", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.119", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap389fa2e1-24", "ovs_interfaceid": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 22:48:51 compute-0 nova_compute[185650]: 2026-01-27 22:48:51.438 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Releasing lock "refresh_cache-344c74c3-95d6-4f19-993f-b4a89c9d074b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 22:48:51 compute-0 nova_compute[185650]: 2026-01-27 22:48:51.438 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 22:48:51 compute-0 nova_compute[185650]: 2026-01-27 22:48:51.439 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:48:51 compute-0 nova_compute[185650]: 2026-01-27 22:48:51.440 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:48:51 compute-0 nova_compute[185650]: 2026-01-27 22:48:51.473 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:48:51 compute-0 nova_compute[185650]: 2026-01-27 22:48:51.474 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:48:51 compute-0 nova_compute[185650]: 2026-01-27 22:48:51.474 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:48:51 compute-0 nova_compute[185650]: 2026-01-27 22:48:51.474 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 22:48:51 compute-0 nova_compute[185650]: 2026-01-27 22:48:51.518 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:48:51 compute-0 nova_compute[185650]: 2026-01-27 22:48:51.579 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:48:51 compute-0 nova_compute[185650]: 2026-01-27 22:48:51.647 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:48:51 compute-0 nova_compute[185650]: 2026-01-27 22:48:51.651 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:48:51 compute-0 nova_compute[185650]: 2026-01-27 22:48:51.730 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:48:51 compute-0 nova_compute[185650]: 2026-01-27 22:48:51.732 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:48:51 compute-0 nova_compute[185650]: 2026-01-27 22:48:51.798 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:48:51 compute-0 nova_compute[185650]: 2026-01-27 22:48:51.801 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:48:51 compute-0 nova_compute[185650]: 2026-01-27 22:48:51.877 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:48:51 compute-0 nova_compute[185650]: 2026-01-27 22:48:51.889 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:48:51 compute-0 nova_compute[185650]: 2026-01-27 22:48:51.964 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:48:51 compute-0 nova_compute[185650]: 2026-01-27 22:48:51.966 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:48:52 compute-0 nova_compute[185650]: 2026-01-27 22:48:52.047 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:48:52 compute-0 nova_compute[185650]: 2026-01-27 22:48:52.049 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:48:52 compute-0 nova_compute[185650]: 2026-01-27 22:48:52.104 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:48:52 compute-0 nova_compute[185650]: 2026-01-27 22:48:52.106 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:48:52 compute-0 nova_compute[185650]: 2026-01-27 22:48:52.164 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:48:52 compute-0 nova_compute[185650]: 2026-01-27 22:48:52.171 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:48:52 compute-0 nova_compute[185650]: 2026-01-27 22:48:52.231 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:48:52 compute-0 nova_compute[185650]: 2026-01-27 22:48:52.233 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:48:52 compute-0 nova_compute[185650]: 2026-01-27 22:48:52.291 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:48:52 compute-0 nova_compute[185650]: 2026-01-27 22:48:52.294 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:48:52 compute-0 nova_compute[185650]: 2026-01-27 22:48:52.356 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:48:52 compute-0 nova_compute[185650]: 2026-01-27 22:48:52.357 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:48:52 compute-0 nova_compute[185650]: 2026-01-27 22:48:52.417 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:48:52 compute-0 nova_compute[185650]: 2026-01-27 22:48:52.774 185654 WARNING nova.virt.libvirt.driver [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 22:48:52 compute-0 nova_compute[185650]: 2026-01-27 22:48:52.776 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4915MB free_disk=72.39941787719727GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 22:48:52 compute-0 nova_compute[185650]: 2026-01-27 22:48:52.776 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:48:52 compute-0 nova_compute[185650]: 2026-01-27 22:48:52.777 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:48:52 compute-0 nova_compute[185650]: 2026-01-27 22:48:52.871 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 344c74c3-95d6-4f19-993f-b4a89c9d074b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:48:52 compute-0 nova_compute[185650]: 2026-01-27 22:48:52.872 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance d2c3fc6f-7629-469b-be68-8fe07acabe0f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:48:52 compute-0 nova_compute[185650]: 2026-01-27 22:48:52.872 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance dd624b81-38f5-46aa-881b-ca66ace64fd3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:48:52 compute-0 nova_compute[185650]: 2026-01-27 22:48:52.872 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 22:48:52 compute-0 nova_compute[185650]: 2026-01-27 22:48:52.873 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 22:48:52 compute-0 nova_compute[185650]: 2026-01-27 22:48:52.886 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Refreshing inventories for resource provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 22:48:52 compute-0 nova_compute[185650]: 2026-01-27 22:48:52.900 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Updating ProviderTree inventory for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 22:48:52 compute-0 nova_compute[185650]: 2026-01-27 22:48:52.900 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Updating inventory in ProviderTree for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 22:48:52 compute-0 nova_compute[185650]: 2026-01-27 22:48:52.915 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Refreshing aggregate associations for resource provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 22:48:52 compute-0 nova_compute[185650]: 2026-01-27 22:48:52.937 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Refreshing trait associations for resource provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NODE,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_AVX,HW_CPU_X86_MMX,HW_CPU_X86_BMI,HW_CPU_X86_SSE41,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 22:48:53 compute-0 nova_compute[185650]: 2026-01-27 22:48:53.003 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 22:48:53 compute-0 nova_compute[185650]: 2026-01-27 22:48:53.019 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 22:48:53 compute-0 nova_compute[185650]: 2026-01-27 22:48:53.043 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 22:48:53 compute-0 nova_compute[185650]: 2026-01-27 22:48:53.044 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.267s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:48:53 compute-0 nova_compute[185650]: 2026-01-27 22:48:53.598 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:48:53 compute-0 nova_compute[185650]: 2026-01-27 22:48:53.728 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:48:53 compute-0 nova_compute[185650]: 2026-01-27 22:48:53.729 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:48:53 compute-0 nova_compute[185650]: 2026-01-27 22:48:53.730 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 22:48:54 compute-0 nova_compute[185650]: 2026-01-27 22:48:54.119 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:48:55 compute-0 nova_compute[185650]: 2026-01-27 22:48:55.153 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:48:55 compute-0 nova_compute[185650]: 2026-01-27 22:48:55.154 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:48:55 compute-0 nova_compute[185650]: 2026-01-27 22:48:55.155 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:48:56 compute-0 podman[240921]: 2026-01-27 22:48:56.387186034 +0000 UTC m=+0.084630123 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Jan 27 22:48:56 compute-0 nova_compute[185650]: 2026-01-27 22:48:56.523 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:48:56 compute-0 nova_compute[185650]: 2026-01-27 22:48:56.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:48:57 compute-0 podman[240941]: 2026-01-27 22:48:57.404945625 +0000 UTC m=+0.096986142 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent)
Jan 27 22:48:59 compute-0 podman[201529]: time="2026-01-27T22:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:48:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 22:48:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4353 "" "Go-http-client/1.1"
Jan 27 22:49:00 compute-0 nova_compute[185650]: 2026-01-27 22:49:00.041 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:49:01 compute-0 podman[240959]: 2026-01-27 22:49:01.360540169 +0000 UTC m=+0.063184104 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 22:49:01 compute-0 openstack_network_exporter[204648]: ERROR   22:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:49:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:49:01 compute-0 openstack_network_exporter[204648]: ERROR   22:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:49:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:49:01 compute-0 nova_compute[185650]: 2026-01-27 22:49:01.525 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:49:03 compute-0 podman[240983]: 2026-01-27 22:49:03.405223881 +0000 UTC m=+0.108568199 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 27 22:49:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:49:04.137 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:49:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:49:04.137 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:49:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:49:04.138 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:49:05 compute-0 nova_compute[185650]: 2026-01-27 22:49:05.044 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:49:06 compute-0 podman[241004]: 2026-01-27 22:49:06.427781149 +0000 UTC m=+0.120796625 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, name=ubi9, version=9.4, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, managed_by=edpm_ansible, container_name=kepler, distribution-scope=public, maintainer=Red Hat, Inc., release-0.7.12=, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1214.1726694543)
Jan 27 22:49:06 compute-0 podman[241005]: 2026-01-27 22:49:06.439051038 +0000 UTC m=+0.133627046 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 
Base Image, org.label-schema.schema-version=1.0)
Jan 27 22:49:06 compute-0 nova_compute[185650]: 2026-01-27 22:49:06.526 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:49:10 compute-0 nova_compute[185650]: 2026-01-27 22:49:10.047 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:49:11 compute-0 nova_compute[185650]: 2026-01-27 22:49:11.529 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:49:12 compute-0 ovn_controller[98048]: 2026-01-27T22:49:12Z|00044|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Jan 27 22:49:13 compute-0 ovn_controller[98048]: 2026-01-27T22:49:13Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:54:77:d7 192.168.0.223
Jan 27 22:49:13 compute-0 ovn_controller[98048]: 2026-01-27T22:49:13Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:54:77:d7 192.168.0.223
Jan 27 22:49:14 compute-0 podman[241061]: 2026-01-27 22:49:14.409887035 +0000 UTC m=+0.105783873 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 22:49:15 compute-0 nova_compute[185650]: 2026-01-27 22:49:15.049 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:49:16 compute-0 podman[241085]: 2026-01-27 22:49:16.389640716 +0000 UTC m=+0.089032863 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., architecture=x86_64, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, summary=Provides the latest 
release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.expose-services=, container_name=openstack_network_exporter, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, managed_by=edpm_ansible)
Jan 27 22:49:16 compute-0 nova_compute[185650]: 2026-01-27 22:49:16.532 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:49:20 compute-0 nova_compute[185650]: 2026-01-27 22:49:20.051 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:49:21 compute-0 nova_compute[185650]: 2026-01-27 22:49:21.536 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:49:25 compute-0 nova_compute[185650]: 2026-01-27 22:49:25.053 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:49:26 compute-0 nova_compute[185650]: 2026-01-27 22:49:26.540 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:49:27 compute-0 podman[241107]: 2026-01-27 22:49:27.376297874 +0000 UTC m=+0.083191962 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2)
Jan 27 22:49:28 compute-0 podman[241125]: 2026-01-27 22:49:28.369873851 +0000 UTC m=+0.077211019 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 27 22:49:29 compute-0 podman[201529]: time="2026-01-27T22:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:49:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 22:49:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4367 "" "Go-http-client/1.1"
Jan 27 22:49:30 compute-0 nova_compute[185650]: 2026-01-27 22:49:30.056 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:49:31 compute-0 openstack_network_exporter[204648]: ERROR   22:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:49:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:49:31 compute-0 openstack_network_exporter[204648]: ERROR   22:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:49:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:49:31 compute-0 nova_compute[185650]: 2026-01-27 22:49:31.543 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:49:32 compute-0 podman[241144]: 2026-01-27 22:49:32.375126629 +0000 UTC m=+0.067789652 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 22:49:34 compute-0 podman[241168]: 2026-01-27 22:49:34.410088271 +0000 UTC m=+0.104441524 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 27 22:49:35 compute-0 nova_compute[185650]: 2026-01-27 22:49:35.058 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:49:36 compute-0 nova_compute[185650]: 2026-01-27 22:49:36.545 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:49:37 compute-0 podman[241188]: 2026-01-27 22:49:37.375806313 +0000 UTC m=+0.073854653 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-type=git, version=9.4, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9, build-date=2024-09-18T21:23:30, release-0.7.12=, architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Jan 27 22:49:37 compute-0 podman[241189]: 2026-01-27 22:49:37.415614383 +0000 UTC m=+0.108557531 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 
Base Image, org.label-schema.schema-version=1.0)
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.104 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.104 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c646060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f826c6475f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.108 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.112 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd2c3fc6f-7629-469b-be68-8fe07acabe0f', 'name': 'vn-bxiivp3-qxfwvjemo3rq-sawqp3hw5btx-vnf-e5pqbtf6sduj', 'flavor': {'id': 'c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7e803ca7-2382-4e5a-95f7-55acaa154415'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8318d5a200d74e4386cf4972db015b75', 'user_id': '7387204f74504e288ed7a5dee73f5083', 'hostId': '6b704d868c202dfce1245c3ae64d5f83176b88963479398e3b586eea', 'status': 'active', 'metadata': {'metering.server_group': '3b67098f-eb50-41e2-8c8a-348367561673'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.116 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.116 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.118 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.118 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.118 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '344c74c3-95d6-4f19-993f-b4a89c9d074b', 'name': 'test_0', 'flavor': {'id': 'c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7e803ca7-2382-4e5a-95f7-55acaa154415'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8318d5a200d74e4386cf4972db015b75', 'user_id': '7387204f74504e288ed7a5dee73f5083', 'hostId': '6b704d868c202dfce1245c3ae64d5f83176b88963479398e3b586eea', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.119 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.121 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645490>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.121 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.121 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.122 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.122 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.122 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.122 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance dd624b81-38f5-46aa-881b-ca66ace64fd3 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.123 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.124 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/dd624b81-38f5-46aa-881b-ca66ace64fd3 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}154da27a0715c4500fb4356c9136f029f6352e657551e62d11427d8299e729cc" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.124 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.125 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.126 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.128 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.128 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.128 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6477a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.488 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Tue, 27 Jan 2026 22:49:38 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-badcd6d7-33ea-403e-a95d-c223307435c5 x-openstack-request-id: req-badcd6d7-33ea-403e-a95d-c223307435c5 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.488 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "dd624b81-38f5-46aa-881b-ca66ace64fd3", "name": "vn-bxiivp3-2npykxfceygn-qfpmbakkd4ep-vnf-ztsky6llf24g", "status": "ACTIVE", "tenant_id": "8318d5a200d74e4386cf4972db015b75", "user_id": "7387204f74504e288ed7a5dee73f5083", "metadata": {"metering.server_group": "3b67098f-eb50-41e2-8c8a-348367561673"}, "hostId": "6b704d868c202dfce1245c3ae64d5f83176b88963479398e3b586eea", "image": {"id": "7e803ca7-2382-4e5a-95f7-55acaa154415", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/7e803ca7-2382-4e5a-95f7-55acaa154415"}]}, "flavor": {"id": "c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093"}]}, "created": "2026-01-27T22:48:34Z", "updated": "2026-01-27T22:48:43Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.223", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:54:77:d7"}, {"version": 4, "addr": "192.168.122.201", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:54:77:d7"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/dd624b81-38f5-46aa-881b-ca66ace64fd3"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/dd624b81-38f5-46aa-881b-ca66ace64fd3"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-27T22:48:43.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.488 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/dd624b81-38f5-46aa-881b-ca66ace64fd3 used request id req-badcd6d7-33ea-403e-a95d-c223307435c5 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.490 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dd624b81-38f5-46aa-881b-ca66ace64fd3', 'name': 'vn-bxiivp3-2npykxfceygn-qfpmbakkd4ep-vnf-ztsky6llf24g', 'flavor': {'id': 'c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7e803ca7-2382-4e5a-95f7-55acaa154415'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8318d5a200d74e4386cf4972db015b75', 'user_id': '7387204f74504e288ed7a5dee73f5083', 'hostId': '6b704d868c202dfce1245c3ae64d5f83176b88963479398e3b586eea', 'status': 'active', 'metadata': {'metering.server_group': '3b67098f-eb50-41e2-8c8a-348367561673'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.491 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.491 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c646060>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.491 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c646060>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.491 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.497 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.503 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.506 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T22:49:38.491278) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.508 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for dd624b81-38f5-46aa-881b-ca66ace64fd3 / tapba4dd39b-aa inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.508 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.508 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f826c645dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.509 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.509 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.509 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.509 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.509 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.509 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-27T22:49:38.509386) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.510 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-bxiivp3-2npykxfceygn-qfpmbakkd4ep-vnf-ztsky6llf24g>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-bxiivp3-2npykxfceygn-qfpmbakkd4ep-vnf-ztsky6llf24g>]
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.510 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f826c647800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.510 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.510 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.510 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.510 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.510 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.511 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.511 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.511 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T22:49:38.510584) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.512 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.512 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f826c647650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.512 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.512 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.512 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.512 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.512 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T22:49:38.512442) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.512 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.513 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.513 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.513 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.513 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f826c645640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.514 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.514 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.514 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.514 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.514 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T22:49:38.514218) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.600 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.latency volume: 1578835711 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.601 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.latency volume: 10486324 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.601 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.662 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.latency volume: 1982773015 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.663 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.latency volume: 11972381 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.663 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.723 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.latency volume: 1865013944 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.723 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.latency volume: 12652797 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.724 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.724 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.724 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f826c8ae7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.724 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.724 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.725 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.725 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.725 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.incoming.bytes volume: 4933 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.725 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.bytes volume: 1962 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.725 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.725 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.726 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T22:49:38.725062) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.726 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f826c645a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.726 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.726 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.726 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.727 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.727 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.727 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.727 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.728 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.728 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T22:49:38.727063) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.728 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.728 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.728 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.requests volume: 222 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.729 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.729 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.729 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.729 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f826c6462a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.729 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.730 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.730 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.730 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.730 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.730 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.730 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.730 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T22:49:38.730137) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.731 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.731 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f826c647f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.731 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.731 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.731 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.731 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.731 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T22:49:38.731634) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.750 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/cpu volume: 248930000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.781 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/cpu volume: 36890000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.805 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/cpu volume: 30040000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.806 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.806 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f826c645af0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.806 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.806 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.807 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.807 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.808 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.808 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f826c645d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.808 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T22:49:38.807153) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.809 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.809 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.809 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.809 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.809 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/memory.usage volume: 49.17578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.810 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/memory.usage volume: 48.765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.810 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/memory.usage volume: 49.69140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.811 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.811 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f826c645b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.811 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T22:49:38.809609) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.812 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.812 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.812 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.812 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.813 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.813 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f826c644a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.813 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.813 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.814 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.814 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.814 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T22:49:38.812467) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.814 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T22:49:38.814255) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.842 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.842 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.843 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.867 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.868 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.868 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.889 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.889 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.890 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.890 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.890 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f826c6453a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.890 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.891 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645490>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.891 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645490>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.891 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.891 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.891 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T22:49:38.891140) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.891 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.892 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.892 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.892 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.893 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.893 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.893 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.893 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.894 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.894 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f826c6454c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.894 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.895 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.895 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.895 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.895 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.latency volume: 560972745 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.895 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.latency volume: 98708783 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.895 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.latency volume: 82244967 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.896 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T22:49:38.895192) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.896 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.latency volume: 603707572 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.896 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.latency volume: 113814738 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.896 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.latency volume: 101138361 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.897 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.latency volume: 587344116 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.897 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.latency volume: 100532473 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.897 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.latency volume: 196826454 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.897 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.897 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f826c645520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.898 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.898 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645550>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.898 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645550>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.898 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.898 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.898 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T22:49:38.898309) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.898 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.899 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.899 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.899 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.899 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.900 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.900 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.900 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.901 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.901 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f826c645d90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.901 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.901 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.901 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.901 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.901 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.901 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.902 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.903 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T22:49:38.901532) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.902 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.903 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f826c646570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.904 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.904 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.904 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.904 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.905 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.905 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.905 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.905 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.906 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f826c645580>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.906 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T22:49:38.904245) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.906 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.906 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.906 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.906 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.907 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T22:49:38.906824) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.907 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.907 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.907 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.907 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.908 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.908 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.908 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.909 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.909 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.910 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.910 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f826c6455e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.910 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.910 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645610>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.910 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645610>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.910 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.910 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.bytes volume: 41893888 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.910 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.911 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.911 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.911 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.912 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.912 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T22:49:38.910514) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.912 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.bytes volume: 41697280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.912 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.913 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.913 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.913 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f826c644050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.914 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.914 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.914 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.914 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.914 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.914 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.packets volume: 18 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.915 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T22:49:38.914443) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.915 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.915 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.915 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f826c647860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.916 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.916 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.916 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.916 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.916 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T22:49:38.916376) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.916 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.outgoing.bytes volume: 4962 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.917 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.917 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.outgoing.bytes volume: 1991 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.917 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.917 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f826c6476e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.917 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.917 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.918 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.918 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.918 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.918 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T22:49:38.918136) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.918 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.918 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.919 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.919 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f826c6456a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.919 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.919 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645730>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.919 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645730>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.919 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.919 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.outgoing.packets volume: 44 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.920 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.920 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.outgoing.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.920 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.920 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f826f277b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.920 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T22:49:38.919664) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.921 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.921 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.921 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.921 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.921 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.921 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T22:49:38.921270) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.921 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.922 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.922 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.922 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.922 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.922 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.922 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.923 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.923 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.923 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f826c647770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.923 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.923 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6477a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.923 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6477a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.924 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.924 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.924 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-bxiivp3-2npykxfceygn-qfpmbakkd4ep-vnf-ztsky6llf24g>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-bxiivp3-2npykxfceygn-qfpmbakkd4ep-vnf-ztsky6llf24g>]
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.924 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-27T22:49:38.923984) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.924 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:49:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:49:38.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:49:40 compute-0 nova_compute[185650]: 2026-01-27 22:49:40.060 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:49:41 compute-0 nova_compute[185650]: 2026-01-27 22:49:41.547 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:49:43 compute-0 nova_compute[185650]: 2026-01-27 22:49:43.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:49:44 compute-0 podman[241232]: 2026-01-27 22:49:44.748727537 +0000 UTC m=+0.075858744 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 22:49:45 compute-0 nova_compute[185650]: 2026-01-27 22:49:45.004 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:49:45 compute-0 nova_compute[185650]: 2026-01-27 22:49:45.005 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 22:49:45 compute-0 nova_compute[185650]: 2026-01-27 22:49:45.019 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 22:49:45 compute-0 nova_compute[185650]: 2026-01-27 22:49:45.062 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:49:46 compute-0 nova_compute[185650]: 2026-01-27 22:49:46.550 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:49:47 compute-0 podman[241256]: 2026-01-27 22:49:47.40493844 +0000 UTC m=+0.111165268 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, container_name=openstack_network_exporter, distribution-scope=public, version=9.6, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.buildah.version=1.33.7)
Jan 27 22:49:50 compute-0 nova_compute[185650]: 2026-01-27 22:49:50.008 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:49:50 compute-0 nova_compute[185650]: 2026-01-27 22:49:50.009 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:49:50 compute-0 nova_compute[185650]: 2026-01-27 22:49:50.044 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:49:50 compute-0 nova_compute[185650]: 2026-01-27 22:49:50.045 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:49:50 compute-0 nova_compute[185650]: 2026-01-27 22:49:50.046 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:49:50 compute-0 nova_compute[185650]: 2026-01-27 22:49:50.047 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 22:49:50 compute-0 nova_compute[185650]: 2026-01-27 22:49:50.065 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:49:50 compute-0 nova_compute[185650]: 2026-01-27 22:49:50.153 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:49:50 compute-0 nova_compute[185650]: 2026-01-27 22:49:50.237 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:49:50 compute-0 nova_compute[185650]: 2026-01-27 22:49:50.238 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:49:50 compute-0 nova_compute[185650]: 2026-01-27 22:49:50.318 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:49:50 compute-0 nova_compute[185650]: 2026-01-27 22:49:50.320 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:49:50 compute-0 nova_compute[185650]: 2026-01-27 22:49:50.387 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:49:50 compute-0 nova_compute[185650]: 2026-01-27 22:49:50.389 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:49:50 compute-0 nova_compute[185650]: 2026-01-27 22:49:50.454 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:49:50 compute-0 nova_compute[185650]: 2026-01-27 22:49:50.464 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:49:50 compute-0 nova_compute[185650]: 2026-01-27 22:49:50.535 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:49:50 compute-0 nova_compute[185650]: 2026-01-27 22:49:50.537 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:49:50 compute-0 nova_compute[185650]: 2026-01-27 22:49:50.602 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:49:50 compute-0 nova_compute[185650]: 2026-01-27 22:49:50.605 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:49:50 compute-0 nova_compute[185650]: 2026-01-27 22:49:50.679 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:49:50 compute-0 nova_compute[185650]: 2026-01-27 22:49:50.681 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:49:50 compute-0 nova_compute[185650]: 2026-01-27 22:49:50.743 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:49:50 compute-0 nova_compute[185650]: 2026-01-27 22:49:50.754 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:49:50 compute-0 nova_compute[185650]: 2026-01-27 22:49:50.822 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:49:50 compute-0 nova_compute[185650]: 2026-01-27 22:49:50.826 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:49:50 compute-0 nova_compute[185650]: 2026-01-27 22:49:50.906 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:49:50 compute-0 nova_compute[185650]: 2026-01-27 22:49:50.910 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:49:51 compute-0 nova_compute[185650]: 2026-01-27 22:49:51.017 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:49:51 compute-0 nova_compute[185650]: 2026-01-27 22:49:51.019 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:49:51 compute-0 nova_compute[185650]: 2026-01-27 22:49:51.084 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:49:51 compute-0 nova_compute[185650]: 2026-01-27 22:49:51.444 185654 WARNING nova.virt.libvirt.driver [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 22:49:51 compute-0 nova_compute[185650]: 2026-01-27 22:49:51.445 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4815MB free_disk=72.37852478027344GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 22:49:51 compute-0 nova_compute[185650]: 2026-01-27 22:49:51.446 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:49:51 compute-0 nova_compute[185650]: 2026-01-27 22:49:51.446 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:49:51 compute-0 nova_compute[185650]: 2026-01-27 22:49:51.555 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:49:51 compute-0 nova_compute[185650]: 2026-01-27 22:49:51.569 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 344c74c3-95d6-4f19-993f-b4a89c9d074b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:49:51 compute-0 nova_compute[185650]: 2026-01-27 22:49:51.569 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance d2c3fc6f-7629-469b-be68-8fe07acabe0f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:49:51 compute-0 nova_compute[185650]: 2026-01-27 22:49:51.570 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance dd624b81-38f5-46aa-881b-ca66ace64fd3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:49:51 compute-0 nova_compute[185650]: 2026-01-27 22:49:51.570 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 22:49:51 compute-0 nova_compute[185650]: 2026-01-27 22:49:51.570 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 22:49:51 compute-0 nova_compute[185650]: 2026-01-27 22:49:51.679 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 22:49:51 compute-0 nova_compute[185650]: 2026-01-27 22:49:51.697 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 22:49:51 compute-0 nova_compute[185650]: 2026-01-27 22:49:51.699 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 22:49:51 compute-0 nova_compute[185650]: 2026-01-27 22:49:51.700 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.253s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:49:52 compute-0 nova_compute[185650]: 2026-01-27 22:49:52.685 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:49:52 compute-0 nova_compute[185650]: 2026-01-27 22:49:52.685 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 22:49:53 compute-0 nova_compute[185650]: 2026-01-27 22:49:53.281 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "refresh_cache-d2c3fc6f-7629-469b-be68-8fe07acabe0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 22:49:53 compute-0 nova_compute[185650]: 2026-01-27 22:49:53.281 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquired lock "refresh_cache-d2c3fc6f-7629-469b-be68-8fe07acabe0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 22:49:53 compute-0 nova_compute[185650]: 2026-01-27 22:49:53.282 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 22:49:54 compute-0 nova_compute[185650]: 2026-01-27 22:49:54.793 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Updating instance_info_cache with network_info: [{"id": "2083900f-b759-4c97-8c34-5ad3832f0446", "address": "fa:16:3e:27:7c:56", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2083900f-b7", "ovs_interfaceid": "2083900f-b759-4c97-8c34-5ad3832f0446", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 22:49:54 compute-0 nova_compute[185650]: 2026-01-27 22:49:54.823 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Releasing lock "refresh_cache-d2c3fc6f-7629-469b-be68-8fe07acabe0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 22:49:54 compute-0 nova_compute[185650]: 2026-01-27 22:49:54.824 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 22:49:54 compute-0 nova_compute[185650]: 2026-01-27 22:49:54.825 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:49:54 compute-0 nova_compute[185650]: 2026-01-27 22:49:54.826 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:49:54 compute-0 nova_compute[185650]: 2026-01-27 22:49:54.827 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 22:49:54 compute-0 nova_compute[185650]: 2026-01-27 22:49:54.828 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:49:54 compute-0 nova_compute[185650]: 2026-01-27 22:49:54.829 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 22:49:55 compute-0 nova_compute[185650]: 2026-01-27 22:49:55.008 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:49:55 compute-0 nova_compute[185650]: 2026-01-27 22:49:55.009 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:49:55 compute-0 nova_compute[185650]: 2026-01-27 22:49:55.009 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:49:55 compute-0 nova_compute[185650]: 2026-01-27 22:49:55.067 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:49:56 compute-0 nova_compute[185650]: 2026-01-27 22:49:56.559 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:49:57 compute-0 nova_compute[185650]: 2026-01-27 22:49:57.992 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:49:58 compute-0 podman[241315]: 2026-01-27 22:49:58.394644253 +0000 UTC m=+0.091040467 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 27 22:49:58 compute-0 podman[241333]: 2026-01-27 22:49:58.528755113 +0000 UTC m=+0.099752492 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 22:49:59 compute-0 podman[201529]: time="2026-01-27T22:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:49:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 22:49:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4366 "" "Go-http-client/1.1"
Jan 27 22:50:00 compute-0 nova_compute[185650]: 2026-01-27 22:50:00.069 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:01 compute-0 openstack_network_exporter[204648]: ERROR   22:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:50:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:50:01 compute-0 openstack_network_exporter[204648]: ERROR   22:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:50:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:50:01 compute-0 nova_compute[185650]: 2026-01-27 22:50:01.561 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:03 compute-0 podman[241352]: 2026-01-27 22:50:03.377379307 +0000 UTC m=+0.084248461 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 22:50:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:50:04.138 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:50:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:50:04.139 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:50:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:50:04.140 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:50:05 compute-0 nova_compute[185650]: 2026-01-27 22:50:05.073 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:05 compute-0 podman[241376]: 2026-01-27 22:50:05.388814716 +0000 UTC m=+0.084838376 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 22:50:06 compute-0 nova_compute[185650]: 2026-01-27 22:50:06.565 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:08 compute-0 podman[241396]: 2026-01-27 22:50:08.414525161 +0000 UTC m=+0.116532727 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, architecture=x86_64, config_id=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, release=1214.1726694543, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Jan 27 22:50:08 compute-0 podman[241397]: 2026-01-27 22:50:08.456804615 +0000 UTC m=+0.149520881 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 27 22:50:10 compute-0 nova_compute[185650]: 2026-01-27 22:50:10.076 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:11 compute-0 nova_compute[185650]: 2026-01-27 22:50:11.568 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:15 compute-0 nova_compute[185650]: 2026-01-27 22:50:15.078 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:15 compute-0 podman[241438]: 2026-01-27 22:50:15.402532194 +0000 UTC m=+0.084907608 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 22:50:16 compute-0 nova_compute[185650]: 2026-01-27 22:50:16.572 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:18 compute-0 podman[241462]: 2026-01-27 22:50:18.390962974 +0000 UTC m=+0.095049371 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, config_id=openstack_network_exporter, release=1755695350, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, architecture=x86_64, vcs-type=git, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 27 22:50:20 compute-0 nova_compute[185650]: 2026-01-27 22:50:20.080 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:21 compute-0 nova_compute[185650]: 2026-01-27 22:50:21.575 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:25 compute-0 nova_compute[185650]: 2026-01-27 22:50:25.082 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:26 compute-0 nova_compute[185650]: 2026-01-27 22:50:26.577 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:29 compute-0 podman[241485]: 2026-01-27 22:50:29.400218513 +0000 UTC m=+0.099888986 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 22:50:29 compute-0 podman[241486]: 2026-01-27 22:50:29.416737141 +0000 UTC m=+0.113217902 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, io.buildah.version=1.41.4)
Jan 27 22:50:29 compute-0 podman[201529]: time="2026-01-27T22:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:50:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 22:50:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4369 "" "Go-http-client/1.1"
Jan 27 22:50:30 compute-0 nova_compute[185650]: 2026-01-27 22:50:30.085 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:31 compute-0 openstack_network_exporter[204648]: ERROR   22:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:50:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:50:31 compute-0 openstack_network_exporter[204648]: ERROR   22:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:50:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:50:31 compute-0 nova_compute[185650]: 2026-01-27 22:50:31.579 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:33 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:50:33.570 107302 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:41:f8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '26:ae:8e:b8:80:28'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 22:50:33 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:50:33.571 107302 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 22:50:33 compute-0 nova_compute[185650]: 2026-01-27 22:50:33.571 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:34 compute-0 podman[241520]: 2026-01-27 22:50:34.383113971 +0000 UTC m=+0.086536599 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 22:50:35 compute-0 nova_compute[185650]: 2026-01-27 22:50:35.087 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:36 compute-0 podman[241544]: 2026-01-27 22:50:36.366814943 +0000 UTC m=+0.069785787 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi)
Jan 27 22:50:36 compute-0 nova_compute[185650]: 2026-01-27 22:50:36.584 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.014 185654 DEBUG oslo_concurrency.lockutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "5409358c-78dc-4761-841a-7f453c6209fb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.014 185654 DEBUG oslo_concurrency.lockutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "5409358c-78dc-4761-841a-7f453c6209fb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.030 185654 DEBUG nova.compute.manager [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.103 185654 DEBUG oslo_concurrency.lockutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.103 185654 DEBUG oslo_concurrency.lockutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.112 185654 DEBUG nova.virt.hardware [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.112 185654 INFO nova.compute.claims [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Claim successful on node compute-0.ctlplane.example.com
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.280 185654 DEBUG nova.compute.provider_tree [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.296 185654 DEBUG nova.scheduler.client.report [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.325 185654 DEBUG oslo_concurrency.lockutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.326 185654 DEBUG nova.compute.manager [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.379 185654 DEBUG nova.compute.manager [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.380 185654 DEBUG nova.network.neutron [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.403 185654 INFO nova.virt.libvirt.driver [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.445 185654 DEBUG nova.compute.manager [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.530 185654 DEBUG nova.compute.manager [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.532 185654 DEBUG nova.virt.libvirt.driver [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.533 185654 INFO nova.virt.libvirt.driver [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Creating image(s)
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.534 185654 DEBUG oslo_concurrency.lockutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "/var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.534 185654 DEBUG oslo_concurrency.lockutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "/var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.535 185654 DEBUG oslo_concurrency.lockutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "/var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.553 185654 DEBUG oslo_concurrency.processutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.614 185654 DEBUG oslo_concurrency.processutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.615 185654 DEBUG oslo_concurrency.lockutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "5c90c71330689347f3144a95195c41f3e929b39e" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.616 185654 DEBUG oslo_concurrency.lockutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "5c90c71330689347f3144a95195c41f3e929b39e" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.632 185654 DEBUG oslo_concurrency.processutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.692 185654 DEBUG oslo_concurrency.processutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.693 185654 DEBUG oslo_concurrency.processutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e,backing_fmt=raw /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.743 185654 DEBUG oslo_concurrency.processutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e,backing_fmt=raw /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk 1073741824" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.744 185654 DEBUG oslo_concurrency.lockutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "5c90c71330689347f3144a95195c41f3e929b39e" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.745 185654 DEBUG oslo_concurrency.processutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.798 185654 DEBUG oslo_concurrency.processutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c90c71330689347f3144a95195c41f3e929b39e --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.799 185654 DEBUG nova.virt.disk.api [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Checking if we can resize image /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.800 185654 DEBUG oslo_concurrency.processutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.856 185654 DEBUG oslo_concurrency.processutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.858 185654 DEBUG nova.virt.disk.api [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Cannot resize image /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.858 185654 DEBUG nova.objects.instance [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lazy-loading 'migration_context' on Instance uuid 5409358c-78dc-4761-841a-7f453c6209fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.876 185654 DEBUG oslo_concurrency.lockutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "/var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.877 185654 DEBUG oslo_concurrency.lockutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "/var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.878 185654 DEBUG oslo_concurrency.lockutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "/var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.897 185654 DEBUG oslo_concurrency.processutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.953 185654 DEBUG oslo_concurrency.processutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.954 185654 DEBUG oslo_concurrency.lockutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.955 185654 DEBUG oslo_concurrency.lockutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:50:38 compute-0 nova_compute[185650]: 2026-01-27 22:50:38.972 185654 DEBUG oslo_concurrency.processutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:50:39 compute-0 nova_compute[185650]: 2026-01-27 22:50:39.027 185654 DEBUG oslo_concurrency.processutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:50:39 compute-0 nova_compute[185650]: 2026-01-27 22:50:39.029 185654 DEBUG oslo_concurrency.processutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:50:39 compute-0 nova_compute[185650]: 2026-01-27 22:50:39.095 185654 DEBUG oslo_concurrency.processutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 1073741824" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:50:39 compute-0 nova_compute[185650]: 2026-01-27 22:50:39.097 185654 DEBUG oslo_concurrency.lockutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.141s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:50:39 compute-0 nova_compute[185650]: 2026-01-27 22:50:39.098 185654 DEBUG oslo_concurrency.processutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:50:39 compute-0 nova_compute[185650]: 2026-01-27 22:50:39.153 185654 DEBUG oslo_concurrency.processutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:50:39 compute-0 nova_compute[185650]: 2026-01-27 22:50:39.155 185654 DEBUG nova.virt.libvirt.driver [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 22:50:39 compute-0 nova_compute[185650]: 2026-01-27 22:50:39.156 185654 DEBUG nova.virt.libvirt.driver [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Ensure instance console log exists: /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 22:50:39 compute-0 nova_compute[185650]: 2026-01-27 22:50:39.158 185654 DEBUG oslo_concurrency.lockutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:50:39 compute-0 nova_compute[185650]: 2026-01-27 22:50:39.159 185654 DEBUG oslo_concurrency.lockutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:50:39 compute-0 nova_compute[185650]: 2026-01-27 22:50:39.160 185654 DEBUG oslo_concurrency.lockutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:50:39 compute-0 podman[241590]: 2026-01-27 22:50:39.407453561 +0000 UTC m=+0.097558441 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, managed_by=edpm_ansible, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.buildah.version=1.29.0, release=1214.1726694543, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Jan 27 22:50:39 compute-0 podman[241591]: 2026-01-27 22:50:39.452205617 +0000 UTC m=+0.135360070 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 22:50:40 compute-0 nova_compute[185650]: 2026-01-27 22:50:40.090 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:41 compute-0 nova_compute[185650]: 2026-01-27 22:50:41.148 185654 DEBUG nova.network.neutron [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Successfully updated port: ccfe58e9-3ff7-4073-9f9f-c8e641661ba0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 22:50:41 compute-0 nova_compute[185650]: 2026-01-27 22:50:41.168 185654 DEBUG oslo_concurrency.lockutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "refresh_cache-5409358c-78dc-4761-841a-7f453c6209fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 22:50:41 compute-0 nova_compute[185650]: 2026-01-27 22:50:41.169 185654 DEBUG oslo_concurrency.lockutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquired lock "refresh_cache-5409358c-78dc-4761-841a-7f453c6209fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 22:50:41 compute-0 nova_compute[185650]: 2026-01-27 22:50:41.170 185654 DEBUG nova.network.neutron [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 22:50:41 compute-0 nova_compute[185650]: 2026-01-27 22:50:41.254 185654 DEBUG nova.compute.manager [req-7317afaf-20b5-403f-8298-8ec58fb92283 req-49679b10-176b-4580-b3c5-5a16fc880d7a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Received event network-changed-ccfe58e9-3ff7-4073-9f9f-c8e641661ba0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 22:50:41 compute-0 nova_compute[185650]: 2026-01-27 22:50:41.254 185654 DEBUG nova.compute.manager [req-7317afaf-20b5-403f-8298-8ec58fb92283 req-49679b10-176b-4580-b3c5-5a16fc880d7a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Refreshing instance network info cache due to event network-changed-ccfe58e9-3ff7-4073-9f9f-c8e641661ba0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 22:50:41 compute-0 nova_compute[185650]: 2026-01-27 22:50:41.255 185654 DEBUG oslo_concurrency.lockutils [req-7317afaf-20b5-403f-8298-8ec58fb92283 req-49679b10-176b-4580-b3c5-5a16fc880d7a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "refresh_cache-5409358c-78dc-4761-841a-7f453c6209fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 22:50:41 compute-0 nova_compute[185650]: 2026-01-27 22:50:41.295 185654 DEBUG nova.network.neutron [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 22:50:41 compute-0 nova_compute[185650]: 2026-01-27 22:50:41.588 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.450 185654 DEBUG nova.network.neutron [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Updating instance_info_cache with network_info: [{"id": "ccfe58e9-3ff7-4073-9f9f-c8e641661ba0", "address": "fa:16:3e:17:dc:a3", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccfe58e9-3f", "ovs_interfaceid": "ccfe58e9-3ff7-4073-9f9f-c8e641661ba0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.466 185654 DEBUG oslo_concurrency.lockutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Releasing lock "refresh_cache-5409358c-78dc-4761-841a-7f453c6209fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.467 185654 DEBUG nova.compute.manager [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Instance network_info: |[{"id": "ccfe58e9-3ff7-4073-9f9f-c8e641661ba0", "address": "fa:16:3e:17:dc:a3", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccfe58e9-3f", "ovs_interfaceid": "ccfe58e9-3ff7-4073-9f9f-c8e641661ba0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.468 185654 DEBUG oslo_concurrency.lockutils [req-7317afaf-20b5-403f-8298-8ec58fb92283 req-49679b10-176b-4580-b3c5-5a16fc880d7a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquired lock "refresh_cache-5409358c-78dc-4761-841a-7f453c6209fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.469 185654 DEBUG nova.network.neutron [req-7317afaf-20b5-403f-8298-8ec58fb92283 req-49679b10-176b-4580-b3c5-5a16fc880d7a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Refreshing network info cache for port ccfe58e9-3ff7-4073-9f9f-c8e641661ba0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.473 185654 DEBUG nova.virt.libvirt.driver [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Start _get_guest_xml network_info=[{"id": "ccfe58e9-3ff7-4073-9f9f-c8e641661ba0", "address": "fa:16:3e:17:dc:a3", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccfe58e9-3f", "ovs_interfaceid": "ccfe58e9-3ff7-4073-9f9f-c8e641661ba0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-27T22:42:20Z,direct_url=<?>,disk_format='qcow2',id=7e803ca7-2382-4e5a-95f7-55acaa154415,min_disk=0,min_ram=0,name='cirros',owner='8318d5a200d74e4386cf4972db015b75',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-27T22:42:22Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_format': None, 'boot_index': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7e803ca7-2382-4e5a-95f7-55acaa154415'}], 'ephemerals': [{'size': 1, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'encrypted': False}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.481 185654 WARNING nova.virt.libvirt.driver [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.494 185654 DEBUG nova.virt.libvirt.host [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.495 185654 DEBUG nova.virt.libvirt.host [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.500 185654 DEBUG nova.virt.libvirt.host [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.501 185654 DEBUG nova.virt.libvirt.host [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.502 185654 DEBUG nova.virt.libvirt.driver [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.503 185654 DEBUG nova.virt.hardware [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T22:42:25Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-27T22:42:20Z,direct_url=<?>,disk_format='qcow2',id=7e803ca7-2382-4e5a-95f7-55acaa154415,min_disk=0,min_ram=0,name='cirros',owner='8318d5a200d74e4386cf4972db015b75',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-27T22:42:22Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.504 185654 DEBUG nova.virt.hardware [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.504 185654 DEBUG nova.virt.hardware [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.505 185654 DEBUG nova.virt.hardware [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.507 185654 DEBUG nova.virt.hardware [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.508 185654 DEBUG nova.virt.hardware [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.509 185654 DEBUG nova.virt.hardware [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.509 185654 DEBUG nova.virt.hardware [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.510 185654 DEBUG nova.virt.hardware [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.511 185654 DEBUG nova.virt.hardware [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.512 185654 DEBUG nova.virt.hardware [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.519 185654 DEBUG nova.virt.libvirt.vif [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T22:50:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-bxiivp3-je4u2ztq4ixb-joz7rt6vemeh-vnf-jpr5uezxduem',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-bxiivp3-je4u2ztq4ixb-joz7rt6vemeh-vnf-jpr5uezxduem',id=4,image_ref='7e803ca7-2382-4e5a-95f7-55acaa154415',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='3b67098f-eb50-41e2-8c8a-348367561673'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8318d5a200d74e4386cf4972db015b75',ramdisk_id='',reservation_id='r-hvzumw9m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='7e803ca7-2382-4e5a-95f7-55acaa154415',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T22:50:38Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0xMDAyMDk3NjU5MzI3NTYyOTkxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTEwMDIwOTc2NTkzMjc1NjI5OTE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MTAwMjA5NzY1OTMyNzU2Mjk5MT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTEwMDIwOTc2NTkzMjc1NjI5OTE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0xMDAyMDk3NjU5MzI3NTYyOTkxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0xMDAyMDk3NjU5MzI3NTYyOTkxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Jan 27 22:50:43 compute-0 nova_compute[185650]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MTAwMjA5NzY1OTMyNzU2Mjk5MT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTEwMDIwOTc2NTkzMjc1NjI5OTE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0xMDAyMDk3NjU5MzI3NTYyOTkxPT0tLQo=',user_id='7387204f74504e288ed7a5dee73f5083',uuid=5409358c-78dc-4761-841a-7f453c6209fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ccfe58e9-3ff7-4073-9f9f-c8e641661ba0", "address": "fa:16:3e:17:dc:a3", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccfe58e9-3f", "ovs_interfaceid": "ccfe58e9-3ff7-4073-9f9f-c8e641661ba0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.520 185654 DEBUG nova.network.os_vif_util [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Converting VIF {"id": "ccfe58e9-3ff7-4073-9f9f-c8e641661ba0", "address": "fa:16:3e:17:dc:a3", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccfe58e9-3f", "ovs_interfaceid": "ccfe58e9-3ff7-4073-9f9f-c8e641661ba0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.521 185654 DEBUG nova.network.os_vif_util [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:dc:a3,bridge_name='br-int',has_traffic_filtering=True,id=ccfe58e9-3ff7-4073-9f9f-c8e641661ba0,network=Network(98f694e3-becc-413f-b42b-35a7171f7f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapccfe58e9-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.522 185654 DEBUG nova.objects.instance [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5409358c-78dc-4761-841a-7f453c6209fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.536 185654 DEBUG nova.virt.libvirt.driver [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] End _get_guest_xml xml=<domain type="kvm">
Jan 27 22:50:43 compute-0 nova_compute[185650]:   <uuid>5409358c-78dc-4761-841a-7f453c6209fb</uuid>
Jan 27 22:50:43 compute-0 nova_compute[185650]:   <name>instance-00000004</name>
Jan 27 22:50:43 compute-0 nova_compute[185650]:   <memory>524288</memory>
Jan 27 22:50:43 compute-0 nova_compute[185650]:   <vcpu>1</vcpu>
Jan 27 22:50:43 compute-0 nova_compute[185650]:   <metadata>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 22:50:43 compute-0 nova_compute[185650]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:       <nova:name>vn-bxiivp3-je4u2ztq4ixb-joz7rt6vemeh-vnf-jpr5uezxduem</nova:name>
Jan 27 22:50:43 compute-0 nova_compute[185650]:       <nova:creationTime>2026-01-27 22:50:43</nova:creationTime>
Jan 27 22:50:43 compute-0 nova_compute[185650]:       <nova:flavor name="m1.small">
Jan 27 22:50:43 compute-0 nova_compute[185650]:         <nova:memory>512</nova:memory>
Jan 27 22:50:43 compute-0 nova_compute[185650]:         <nova:disk>1</nova:disk>
Jan 27 22:50:43 compute-0 nova_compute[185650]:         <nova:swap>0</nova:swap>
Jan 27 22:50:43 compute-0 nova_compute[185650]:         <nova:ephemeral>1</nova:ephemeral>
Jan 27 22:50:43 compute-0 nova_compute[185650]:         <nova:vcpus>1</nova:vcpus>
Jan 27 22:50:43 compute-0 nova_compute[185650]:       </nova:flavor>
Jan 27 22:50:43 compute-0 nova_compute[185650]:       <nova:owner>
Jan 27 22:50:43 compute-0 nova_compute[185650]:         <nova:user uuid="7387204f74504e288ed7a5dee73f5083">admin</nova:user>
Jan 27 22:50:43 compute-0 nova_compute[185650]:         <nova:project uuid="8318d5a200d74e4386cf4972db015b75">admin</nova:project>
Jan 27 22:50:43 compute-0 nova_compute[185650]:       </nova:owner>
Jan 27 22:50:43 compute-0 nova_compute[185650]:       <nova:root type="image" uuid="7e803ca7-2382-4e5a-95f7-55acaa154415"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:       <nova:ports>
Jan 27 22:50:43 compute-0 nova_compute[185650]:         <nova:port uuid="ccfe58e9-3ff7-4073-9f9f-c8e641661ba0">
Jan 27 22:50:43 compute-0 nova_compute[185650]:           <nova:ip type="fixed" address="192.168.0.99" ipVersion="4"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:         </nova:port>
Jan 27 22:50:43 compute-0 nova_compute[185650]:       </nova:ports>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     </nova:instance>
Jan 27 22:50:43 compute-0 nova_compute[185650]:   </metadata>
Jan 27 22:50:43 compute-0 nova_compute[185650]:   <sysinfo type="smbios">
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <system>
Jan 27 22:50:43 compute-0 nova_compute[185650]:       <entry name="manufacturer">RDO</entry>
Jan 27 22:50:43 compute-0 nova_compute[185650]:       <entry name="product">OpenStack Compute</entry>
Jan 27 22:50:43 compute-0 nova_compute[185650]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 22:50:43 compute-0 nova_compute[185650]:       <entry name="serial">5409358c-78dc-4761-841a-7f453c6209fb</entry>
Jan 27 22:50:43 compute-0 nova_compute[185650]:       <entry name="uuid">5409358c-78dc-4761-841a-7f453c6209fb</entry>
Jan 27 22:50:43 compute-0 nova_compute[185650]:       <entry name="family">Virtual Machine</entry>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     </system>
Jan 27 22:50:43 compute-0 nova_compute[185650]:   </sysinfo>
Jan 27 22:50:43 compute-0 nova_compute[185650]:   <os>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <boot dev="hd"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <smbios mode="sysinfo"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:   </os>
Jan 27 22:50:43 compute-0 nova_compute[185650]:   <features>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <acpi/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <apic/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <vmcoreinfo/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:   </features>
Jan 27 22:50:43 compute-0 nova_compute[185650]:   <clock offset="utc">
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <timer name="hpet" present="no"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:   </clock>
Jan 27 22:50:43 compute-0 nova_compute[185650]:   <cpu mode="host-model" match="exact">
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:   </cpu>
Jan 27 22:50:43 compute-0 nova_compute[185650]:   <devices>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <disk type="file" device="disk">
Jan 27 22:50:43 compute-0 nova_compute[185650]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:       <source file="/var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:       <target dev="vda" bus="virtio"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     </disk>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <disk type="file" device="disk">
Jan 27 22:50:43 compute-0 nova_compute[185650]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:       <source file="/var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:       <target dev="vdb" bus="virtio"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     </disk>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <disk type="file" device="cdrom">
Jan 27 22:50:43 compute-0 nova_compute[185650]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:       <source file="/var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.config"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:       <target dev="sda" bus="sata"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     </disk>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <interface type="ethernet">
Jan 27 22:50:43 compute-0 nova_compute[185650]:       <mac address="fa:16:3e:17:dc:a3"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:       <model type="virtio"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:       <mtu size="1442"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:       <target dev="tapccfe58e9-3f"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     </interface>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <serial type="pty">
Jan 27 22:50:43 compute-0 nova_compute[185650]:       <log file="/var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/console.log" append="off"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     </serial>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <video>
Jan 27 22:50:43 compute-0 nova_compute[185650]:       <model type="virtio"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     </video>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <input type="tablet" bus="usb"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <rng model="virtio">
Jan 27 22:50:43 compute-0 nova_compute[185650]:       <backend model="random">/dev/urandom</backend>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     </rng>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <controller type="usb" index="0"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     <memballoon model="virtio">
Jan 27 22:50:43 compute-0 nova_compute[185650]:       <stats period="10"/>
Jan 27 22:50:43 compute-0 nova_compute[185650]:     </memballoon>
Jan 27 22:50:43 compute-0 nova_compute[185650]:   </devices>
Jan 27 22:50:43 compute-0 nova_compute[185650]: </domain>
Jan 27 22:50:43 compute-0 nova_compute[185650]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.544 185654 DEBUG nova.compute.manager [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Preparing to wait for external event network-vif-plugged-ccfe58e9-3ff7-4073-9f9f-c8e641661ba0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.545 185654 DEBUG oslo_concurrency.lockutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "5409358c-78dc-4761-841a-7f453c6209fb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.545 185654 DEBUG oslo_concurrency.lockutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "5409358c-78dc-4761-841a-7f453c6209fb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.546 185654 DEBUG oslo_concurrency.lockutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "5409358c-78dc-4761-841a-7f453c6209fb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.546 185654 DEBUG nova.virt.libvirt.vif [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T22:50:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-bxiivp3-je4u2ztq4ixb-joz7rt6vemeh-vnf-jpr5uezxduem',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-bxiivp3-je4u2ztq4ixb-joz7rt6vemeh-vnf-jpr5uezxduem',id=4,image_ref='7e803ca7-2382-4e5a-95f7-55acaa154415',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='3b67098f-eb50-41e2-8c8a-348367561673'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8318d5a200d74e4386cf4972db015b75',ramdisk_id='',reservation_id='r-hvzumw9m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='7e803ca7-2382-4e5a-95f7-55acaa154415',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T22:50:38Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0xMDAyMDk3NjU5MzI3NTYyOTkxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTEwMDIwOTc2NTkzMjc1NjI5OTE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MTAwMjA5NzY1OTMyNzU2Mjk5MT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTEwMDIwOTc2NTkzMjc1NjI5OTE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0xMDAyMDk3NjU5MzI3NTYyOTkxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0xMDAyMDk3NjU5MzI3NTYyOTkxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Jan 27 22:50:43 compute-0 nova_compute[185650]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MTAwMjA5NzY1OTMyNzU2Mjk5MT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29
udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTEwMDIwOTc2NTkzMjc1NjI5OTE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0xMDAyMDk3NjU5MzI3NTYyOTkxPT0tLQo=',user_id='7387204f74504e288ed7a5dee73f5083',uuid=5409358c-78dc-4761-841a-7f453c6209fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ccfe58e9-3ff7-4073-9f9f-c8e641661ba0", "address": "fa:16:3e:17:dc:a3", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccfe58e9-3f", "ovs_interfaceid": "ccfe58e9-3ff7-4073-9f9f-c8e641661ba0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.547 185654 DEBUG nova.network.os_vif_util [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Converting VIF {"id": "ccfe58e9-3ff7-4073-9f9f-c8e641661ba0", "address": "fa:16:3e:17:dc:a3", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccfe58e9-3f", "ovs_interfaceid": "ccfe58e9-3ff7-4073-9f9f-c8e641661ba0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.548 185654 DEBUG nova.network.os_vif_util [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:dc:a3,bridge_name='br-int',has_traffic_filtering=True,id=ccfe58e9-3ff7-4073-9f9f-c8e641661ba0,network=Network(98f694e3-becc-413f-b42b-35a7171f7f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapccfe58e9-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.548 185654 DEBUG os_vif [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:dc:a3,bridge_name='br-int',has_traffic_filtering=True,id=ccfe58e9-3ff7-4073-9f9f-c8e641661ba0,network=Network(98f694e3-becc-413f-b42b-35a7171f7f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapccfe58e9-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.549 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.549 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.550 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.554 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.554 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapccfe58e9-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.555 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapccfe58e9-3f, col_values=(('external_ids', {'iface-id': 'ccfe58e9-3ff7-4073-9f9f-c8e641661ba0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:17:dc:a3', 'vm-uuid': '5409358c-78dc-4761-841a-7f453c6209fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.557 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:43 compute-0 NetworkManager[56600]: <info>  [1769554243.5582] manager: (tapccfe58e9-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.558 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.565 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.566 185654 INFO os_vif [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:dc:a3,bridge_name='br-int',has_traffic_filtering=True,id=ccfe58e9-3ff7-4073-9f9f-c8e641661ba0,network=Network(98f694e3-becc-413f-b42b-35a7171f7f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapccfe58e9-3f')
Jan 27 22:50:43 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:50:43.577 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e88f80e1-ee63-4bdc-95c3-ad473efb7428, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.617 185654 DEBUG nova.virt.libvirt.driver [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.617 185654 DEBUG nova.virt.libvirt.driver [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.618 185654 DEBUG nova.virt.libvirt.driver [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.619 185654 DEBUG nova.virt.libvirt.driver [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] No VIF found with MAC fa:16:3e:17:dc:a3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 22:50:43 compute-0 nova_compute[185650]: 2026-01-27 22:50:43.620 185654 INFO nova.virt.libvirt.driver [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Using config drive
Jan 27 22:50:43 compute-0 rsyslogd[235951]: message too long (8192) with configured size 8096, begin of message is: 2026-01-27 22:50:43.519 185654 DEBUG nova.virt.libvirt.vif [None req-29316b19-e1 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 22:50:43 compute-0 rsyslogd[235951]: message too long (8192) with configured size 8096, begin of message is: 2026-01-27 22:50:43.546 185654 DEBUG nova.virt.libvirt.vif [None req-29316b19-e1 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 22:50:44 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 27 22:50:44 compute-0 nova_compute[185650]: 2026-01-27 22:50:44.560 185654 INFO nova.virt.libvirt.driver [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Creating config drive at /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.config
Jan 27 22:50:44 compute-0 nova_compute[185650]: 2026-01-27 22:50:44.568 185654 DEBUG oslo_concurrency.processutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp86jgnzsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:50:44 compute-0 nova_compute[185650]: 2026-01-27 22:50:44.708 185654 DEBUG oslo_concurrency.processutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp86jgnzsi" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:50:44 compute-0 kernel: tapccfe58e9-3f: entered promiscuous mode
Jan 27 22:50:44 compute-0 NetworkManager[56600]: <info>  [1769554244.7900] manager: (tapccfe58e9-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Jan 27 22:50:44 compute-0 nova_compute[185650]: 2026-01-27 22:50:44.789 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:44 compute-0 ovn_controller[98048]: 2026-01-27T22:50:44Z|00045|binding|INFO|Claiming lport ccfe58e9-3ff7-4073-9f9f-c8e641661ba0 for this chassis.
Jan 27 22:50:44 compute-0 ovn_controller[98048]: 2026-01-27T22:50:44Z|00046|binding|INFO|ccfe58e9-3ff7-4073-9f9f-c8e641661ba0: Claiming fa:16:3e:17:dc:a3 192.168.0.99
Jan 27 22:50:44 compute-0 nova_compute[185650]: 2026-01-27 22:50:44.797 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:44 compute-0 nova_compute[185650]: 2026-01-27 22:50:44.812 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:44 compute-0 ovn_controller[98048]: 2026-01-27T22:50:44Z|00047|binding|INFO|Setting lport ccfe58e9-3ff7-4073-9f9f-c8e641661ba0 ovn-installed in OVS
Jan 27 22:50:44 compute-0 nova_compute[185650]: 2026-01-27 22:50:44.815 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:44 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:50:44.814 107302 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:dc:a3 192.168.0.99'], port_security=['fa:16:3e:17:dc:a3 192.168.0.99'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-e3ismbxiivp3-je4u2ztq4ixb-joz7rt6vemeh-port-xhiell7bdepe', 'neutron:cidrs': '192.168.0.99/24', 'neutron:device_id': '5409358c-78dc-4761-841a-7f453c6209fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-98f694e3-becc-413f-b42b-35a7171f7f96', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-e3ismbxiivp3-je4u2ztq4ixb-joz7rt6vemeh-port-xhiell7bdepe', 'neutron:project_id': '8318d5a200d74e4386cf4972db015b75', 'neutron:revision_number': '2', 'neutron:security_group_ids': '597f1057-390b-408a-b8d0-705fb45de27b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.238'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d21d3e2-2f64-49c8-bca6-9efc66f5bd67, chassis=[<ovs.db.idl.Row object at 0x7f8d908cb640>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8d908cb640>], logical_port=ccfe58e9-3ff7-4073-9f9f-c8e641661ba0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 22:50:44 compute-0 ovn_controller[98048]: 2026-01-27T22:50:44Z|00048|binding|INFO|Setting lport ccfe58e9-3ff7-4073-9f9f-c8e641661ba0 up in Southbound
Jan 27 22:50:44 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:50:44.818 107302 INFO neutron.agent.ovn.metadata.agent [-] Port ccfe58e9-3ff7-4073-9f9f-c8e641661ba0 in datapath 98f694e3-becc-413f-b42b-35a7171f7f96 bound to our chassis
Jan 27 22:50:44 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:50:44.820 107302 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 98f694e3-becc-413f-b42b-35a7171f7f96
Jan 27 22:50:44 compute-0 systemd-udevd[241655]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 22:50:44 compute-0 systemd-machined[157036]: New machine qemu-4-instance-00000004.
Jan 27 22:50:44 compute-0 NetworkManager[56600]: <info>  [1769554244.8578] device (tapccfe58e9-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 22:50:44 compute-0 NetworkManager[56600]: <info>  [1769554244.8583] device (tapccfe58e9-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 22:50:44 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:50:44.859 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[9634dbff-190f-40ed-aa52-62fe1c5a049a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:50:44 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Jan 27 22:50:44 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:50:44.894 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[b6fcc00c-3fa6-40f2-934c-6865fd89a5f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:50:44 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:50:44.902 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[49fb04ec-76ad-4f39-9bcb-d5b146ff90aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:50:44 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:50:44.933 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[ed53ad51-fd2d-4f81-96fb-ce2518285b4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:50:44 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:50:44.949 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[4e4cdc95-ca0c-48e5-9ed5-d571733e02a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap98f694e3-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:25:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 9, 'rx_bytes': 574, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 9, 'rx_bytes': 574, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 365000, 'reachable_time': 29787, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 241668, 'error': None, 'target': 'ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:50:44 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:50:44.966 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[044452ad-788b-411b-ba4f-dc8479e7ef69]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap98f694e3-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 365013, 'tstamp': 365013}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241669, 'error': None, 'target': 'ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap98f694e3-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 365017, 'tstamp': 365017}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241669, 'error': None, 'target': 'ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:50:44 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:50:44.968 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98f694e3-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:50:44 compute-0 nova_compute[185650]: 2026-01-27 22:50:44.970 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:44 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:50:44.971 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98f694e3-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:50:44 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:50:44.971 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 22:50:44 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:50:44.972 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap98f694e3-b0, col_values=(('external_ids', {'iface-id': 'acacffcb-4de9-40c5-aeef-3e5766b557e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:50:44 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:50:44.972 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.092 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.175 185654 DEBUG nova.virt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Emitting event <LifecycleEvent: 1769554245.1747713, 5409358c-78dc-4761-841a-7f453c6209fb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.175 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] VM Started (Lifecycle Event)
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.193 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.198 185654 DEBUG nova.virt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Emitting event <LifecycleEvent: 1769554245.1748624, 5409358c-78dc-4761-841a-7f453c6209fb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.199 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] VM Paused (Lifecycle Event)
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.215 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.220 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.236 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.514 185654 DEBUG nova.compute.manager [req-77246bfc-9545-45b1-b361-d56c2506837a req-61521577-05ca-40d1-92b3-20fa099b61fb b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Received event network-vif-plugged-ccfe58e9-3ff7-4073-9f9f-c8e641661ba0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.516 185654 DEBUG oslo_concurrency.lockutils [req-77246bfc-9545-45b1-b361-d56c2506837a req-61521577-05ca-40d1-92b3-20fa099b61fb b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "5409358c-78dc-4761-841a-7f453c6209fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.517 185654 DEBUG oslo_concurrency.lockutils [req-77246bfc-9545-45b1-b361-d56c2506837a req-61521577-05ca-40d1-92b3-20fa099b61fb b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "5409358c-78dc-4761-841a-7f453c6209fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.517 185654 DEBUG oslo_concurrency.lockutils [req-77246bfc-9545-45b1-b361-d56c2506837a req-61521577-05ca-40d1-92b3-20fa099b61fb b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "5409358c-78dc-4761-841a-7f453c6209fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.518 185654 DEBUG nova.compute.manager [req-77246bfc-9545-45b1-b361-d56c2506837a req-61521577-05ca-40d1-92b3-20fa099b61fb b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Processing event network-vif-plugged-ccfe58e9-3ff7-4073-9f9f-c8e641661ba0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.519 185654 DEBUG nova.compute.manager [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.523 185654 DEBUG nova.virt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Emitting event <LifecycleEvent: 1769554245.5227988, 5409358c-78dc-4761-841a-7f453c6209fb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.524 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] VM Resumed (Lifecycle Event)
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.528 185654 DEBUG nova.virt.libvirt.driver [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.532 185654 INFO nova.virt.libvirt.driver [-] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Instance spawned successfully.
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.533 185654 DEBUG nova.virt.libvirt.driver [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.552 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.561 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.568 185654 DEBUG nova.virt.libvirt.driver [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.569 185654 DEBUG nova.virt.libvirt.driver [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.569 185654 DEBUG nova.virt.libvirt.driver [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.570 185654 DEBUG nova.virt.libvirt.driver [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.571 185654 DEBUG nova.virt.libvirt.driver [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.572 185654 DEBUG nova.virt.libvirt.driver [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.579 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.618 185654 INFO nova.compute.manager [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Took 7.09 seconds to spawn the instance on the hypervisor.
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.618 185654 DEBUG nova.compute.manager [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.671 185654 INFO nova.compute.manager [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Took 7.59 seconds to build instance.
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.685 185654 DEBUG oslo_concurrency.lockutils [None req-29316b19-e18f-42a6-b9dd-df7767764787 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "5409358c-78dc-4761-841a-7f453c6209fb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.828 185654 DEBUG nova.network.neutron [req-7317afaf-20b5-403f-8298-8ec58fb92283 req-49679b10-176b-4580-b3c5-5a16fc880d7a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Updated VIF entry in instance network info cache for port ccfe58e9-3ff7-4073-9f9f-c8e641661ba0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.829 185654 DEBUG nova.network.neutron [req-7317afaf-20b5-403f-8298-8ec58fb92283 req-49679b10-176b-4580-b3c5-5a16fc880d7a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Updating instance_info_cache with network_info: [{"id": "ccfe58e9-3ff7-4073-9f9f-c8e641661ba0", "address": "fa:16:3e:17:dc:a3", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccfe58e9-3f", "ovs_interfaceid": "ccfe58e9-3ff7-4073-9f9f-c8e641661ba0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 22:50:45 compute-0 nova_compute[185650]: 2026-01-27 22:50:45.845 185654 DEBUG oslo_concurrency.lockutils [req-7317afaf-20b5-403f-8298-8ec58fb92283 req-49679b10-176b-4580-b3c5-5a16fc880d7a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Releasing lock "refresh_cache-5409358c-78dc-4761-841a-7f453c6209fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 22:50:46 compute-0 podman[241678]: 2026-01-27 22:50:46.396278364 +0000 UTC m=+0.100968927 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 22:50:46 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 27 22:50:47 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 27 22:50:47 compute-0 nova_compute[185650]: 2026-01-27 22:50:47.599 185654 DEBUG nova.compute.manager [req-cd9d9bdb-a2af-4d7e-9dc7-ca27e041035b req-aae9cebd-f9a0-4c47-812d-acba3569cdc1 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Received event network-vif-plugged-ccfe58e9-3ff7-4073-9f9f-c8e641661ba0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 22:50:47 compute-0 nova_compute[185650]: 2026-01-27 22:50:47.599 185654 DEBUG oslo_concurrency.lockutils [req-cd9d9bdb-a2af-4d7e-9dc7-ca27e041035b req-aae9cebd-f9a0-4c47-812d-acba3569cdc1 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "5409358c-78dc-4761-841a-7f453c6209fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:50:47 compute-0 nova_compute[185650]: 2026-01-27 22:50:47.599 185654 DEBUG oslo_concurrency.lockutils [req-cd9d9bdb-a2af-4d7e-9dc7-ca27e041035b req-aae9cebd-f9a0-4c47-812d-acba3569cdc1 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "5409358c-78dc-4761-841a-7f453c6209fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:50:47 compute-0 nova_compute[185650]: 2026-01-27 22:50:47.599 185654 DEBUG oslo_concurrency.lockutils [req-cd9d9bdb-a2af-4d7e-9dc7-ca27e041035b req-aae9cebd-f9a0-4c47-812d-acba3569cdc1 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "5409358c-78dc-4761-841a-7f453c6209fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:50:47 compute-0 nova_compute[185650]: 2026-01-27 22:50:47.600 185654 DEBUG nova.compute.manager [req-cd9d9bdb-a2af-4d7e-9dc7-ca27e041035b req-aae9cebd-f9a0-4c47-812d-acba3569cdc1 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] No waiting events found dispatching network-vif-plugged-ccfe58e9-3ff7-4073-9f9f-c8e641661ba0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 22:50:47 compute-0 nova_compute[185650]: 2026-01-27 22:50:47.600 185654 WARNING nova.compute.manager [req-cd9d9bdb-a2af-4d7e-9dc7-ca27e041035b req-aae9cebd-f9a0-4c47-812d-acba3569cdc1 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Received unexpected event network-vif-plugged-ccfe58e9-3ff7-4073-9f9f-c8e641661ba0 for instance with vm_state active and task_state None.
Jan 27 22:50:48 compute-0 nova_compute[185650]: 2026-01-27 22:50:48.557 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:49 compute-0 podman[241720]: 2026-01-27 22:50:49.429603437 +0000 UTC m=+0.125790680 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, release=1755695350, version=9.6, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal)
Jan 27 22:50:50 compute-0 nova_compute[185650]: 2026-01-27 22:50:50.095 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:50 compute-0 nova_compute[185650]: 2026-01-27 22:50:50.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:50:51 compute-0 nova_compute[185650]: 2026-01-27 22:50:51.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:50:51 compute-0 nova_compute[185650]: 2026-01-27 22:50:51.993 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 22:50:52 compute-0 nova_compute[185650]: 2026-01-27 22:50:52.346 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "refresh_cache-dd624b81-38f5-46aa-881b-ca66ace64fd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 22:50:52 compute-0 nova_compute[185650]: 2026-01-27 22:50:52.347 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquired lock "refresh_cache-dd624b81-38f5-46aa-881b-ca66ace64fd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 22:50:52 compute-0 nova_compute[185650]: 2026-01-27 22:50:52.347 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 22:50:53 compute-0 nova_compute[185650]: 2026-01-27 22:50:53.360 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Updating instance_info_cache with network_info: [{"id": "ba4dd39b-aafe-4664-a6e5-0f4eed30dc40", "address": "fa:16:3e:54:77:d7", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.223", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba4dd39b-aa", "ovs_interfaceid": "ba4dd39b-aafe-4664-a6e5-0f4eed30dc40", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 22:50:53 compute-0 nova_compute[185650]: 2026-01-27 22:50:53.376 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Releasing lock "refresh_cache-dd624b81-38f5-46aa-881b-ca66ace64fd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 22:50:53 compute-0 nova_compute[185650]: 2026-01-27 22:50:53.377 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 22:50:53 compute-0 nova_compute[185650]: 2026-01-27 22:50:53.378 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:50:53 compute-0 nova_compute[185650]: 2026-01-27 22:50:53.378 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 22:50:53 compute-0 nova_compute[185650]: 2026-01-27 22:50:53.379 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:50:53 compute-0 nova_compute[185650]: 2026-01-27 22:50:53.397 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:50:53 compute-0 nova_compute[185650]: 2026-01-27 22:50:53.398 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:50:53 compute-0 nova_compute[185650]: 2026-01-27 22:50:53.398 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:50:53 compute-0 nova_compute[185650]: 2026-01-27 22:50:53.399 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 22:50:53 compute-0 nova_compute[185650]: 2026-01-27 22:50:53.493 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:50:53 compute-0 nova_compute[185650]: 2026-01-27 22:50:53.554 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:50:53 compute-0 nova_compute[185650]: 2026-01-27 22:50:53.556 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:50:53 compute-0 nova_compute[185650]: 2026-01-27 22:50:53.576 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:53 compute-0 nova_compute[185650]: 2026-01-27 22:50:53.616 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:50:53 compute-0 nova_compute[185650]: 2026-01-27 22:50:53.618 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:50:53 compute-0 nova_compute[185650]: 2026-01-27 22:50:53.674 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:50:53 compute-0 nova_compute[185650]: 2026-01-27 22:50:53.675 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:50:53 compute-0 nova_compute[185650]: 2026-01-27 22:50:53.732 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:50:53 compute-0 nova_compute[185650]: 2026-01-27 22:50:53.740 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:50:53 compute-0 nova_compute[185650]: 2026-01-27 22:50:53.810 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:50:53 compute-0 nova_compute[185650]: 2026-01-27 22:50:53.811 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:50:53 compute-0 nova_compute[185650]: 2026-01-27 22:50:53.869 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:50:53 compute-0 nova_compute[185650]: 2026-01-27 22:50:53.871 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:50:53 compute-0 nova_compute[185650]: 2026-01-27 22:50:53.928 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:50:53 compute-0 nova_compute[185650]: 2026-01-27 22:50:53.930 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:50:53 compute-0 nova_compute[185650]: 2026-01-27 22:50:53.983 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:50:53 compute-0 nova_compute[185650]: 2026-01-27 22:50:53.989 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:50:54 compute-0 nova_compute[185650]: 2026-01-27 22:50:54.045 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:50:54 compute-0 nova_compute[185650]: 2026-01-27 22:50:54.046 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:50:54 compute-0 nova_compute[185650]: 2026-01-27 22:50:54.128 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:50:54 compute-0 nova_compute[185650]: 2026-01-27 22:50:54.130 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:50:54 compute-0 nova_compute[185650]: 2026-01-27 22:50:54.204 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:50:54 compute-0 nova_compute[185650]: 2026-01-27 22:50:54.205 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:50:54 compute-0 nova_compute[185650]: 2026-01-27 22:50:54.256 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:50:54 compute-0 nova_compute[185650]: 2026-01-27 22:50:54.267 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:50:54 compute-0 nova_compute[185650]: 2026-01-27 22:50:54.331 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:50:54 compute-0 nova_compute[185650]: 2026-01-27 22:50:54.334 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:50:54 compute-0 nova_compute[185650]: 2026-01-27 22:50:54.393 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:50:54 compute-0 nova_compute[185650]: 2026-01-27 22:50:54.396 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:50:54 compute-0 nova_compute[185650]: 2026-01-27 22:50:54.456 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:50:54 compute-0 nova_compute[185650]: 2026-01-27 22:50:54.459 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:50:54 compute-0 nova_compute[185650]: 2026-01-27 22:50:54.529 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:50:54 compute-0 nova_compute[185650]: 2026-01-27 22:50:54.899 185654 WARNING nova.virt.libvirt.driver [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 22:50:54 compute-0 nova_compute[185650]: 2026-01-27 22:50:54.900 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4729MB free_disk=72.37753677368164GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 22:50:54 compute-0 nova_compute[185650]: 2026-01-27 22:50:54.900 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:50:54 compute-0 nova_compute[185650]: 2026-01-27 22:50:54.900 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:50:54 compute-0 nova_compute[185650]: 2026-01-27 22:50:54.983 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 344c74c3-95d6-4f19-993f-b4a89c9d074b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:50:54 compute-0 nova_compute[185650]: 2026-01-27 22:50:54.983 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance d2c3fc6f-7629-469b-be68-8fe07acabe0f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:50:54 compute-0 nova_compute[185650]: 2026-01-27 22:50:54.984 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance dd624b81-38f5-46aa-881b-ca66ace64fd3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:50:54 compute-0 nova_compute[185650]: 2026-01-27 22:50:54.984 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 5409358c-78dc-4761-841a-7f453c6209fb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:50:54 compute-0 nova_compute[185650]: 2026-01-27 22:50:54.984 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 22:50:54 compute-0 nova_compute[185650]: 2026-01-27 22:50:54.984 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 22:50:55 compute-0 nova_compute[185650]: 2026-01-27 22:50:55.098 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:55 compute-0 nova_compute[185650]: 2026-01-27 22:50:55.127 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 22:50:55 compute-0 nova_compute[185650]: 2026-01-27 22:50:55.143 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 22:50:55 compute-0 nova_compute[185650]: 2026-01-27 22:50:55.162 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 22:50:55 compute-0 nova_compute[185650]: 2026-01-27 22:50:55.162 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.262s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:50:55 compute-0 nova_compute[185650]: 2026-01-27 22:50:55.778 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:50:55 compute-0 nova_compute[185650]: 2026-01-27 22:50:55.988 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:50:55 compute-0 nova_compute[185650]: 2026-01-27 22:50:55.988 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:50:56 compute-0 nova_compute[185650]: 2026-01-27 22:50:56.021 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:50:56 compute-0 nova_compute[185650]: 2026-01-27 22:50:56.022 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:50:57 compute-0 nova_compute[185650]: 2026-01-27 22:50:57.994 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:50:58 compute-0 nova_compute[185650]: 2026-01-27 22:50:58.579 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:50:59 compute-0 podman[201529]: time="2026-01-27T22:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:50:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 22:50:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4378 "" "Go-http-client/1.1"
Jan 27 22:51:00 compute-0 nova_compute[185650]: 2026-01-27 22:51:00.101 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:51:00 compute-0 podman[241796]: 2026-01-27 22:51:00.396435936 +0000 UTC m=+0.096748187 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20260126, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 27 22:51:00 compute-0 podman[241795]: 2026-01-27 22:51:00.420064245 +0000 UTC m=+0.119462840 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 22:51:01 compute-0 openstack_network_exporter[204648]: ERROR   22:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:51:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:51:01 compute-0 openstack_network_exporter[204648]: ERROR   22:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:51:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:51:03 compute-0 nova_compute[185650]: 2026-01-27 22:51:03.580 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:51:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:51:04.140 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:51:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:51:04.140 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:51:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:51:04.141 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:51:05 compute-0 nova_compute[185650]: 2026-01-27 22:51:05.103 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:51:05 compute-0 podman[241832]: 2026-01-27 22:51:05.369981059 +0000 UTC m=+0.070428133 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 22:51:07 compute-0 podman[241857]: 2026-01-27 22:51:07.385642291 +0000 UTC m=+0.092410845 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, config_id=ceilometer_agent_ipmi)
Jan 27 22:51:08 compute-0 nova_compute[185650]: 2026-01-27 22:51:08.583 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:51:10 compute-0 nova_compute[185650]: 2026-01-27 22:51:10.106 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:51:10 compute-0 podman[241877]: 2026-01-27 22:51:10.398092314 +0000 UTC m=+0.096466341 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=kepler, distribution-scope=public, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, release-0.7.12=, managed_by=edpm_ansible, name=ubi9, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.buildah.version=1.29.0)
Jan 27 22:51:10 compute-0 podman[241878]: 2026-01-27 22:51:10.436179021 +0000 UTC m=+0.132750676 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 27 22:51:13 compute-0 nova_compute[185650]: 2026-01-27 22:51:13.587 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:51:14 compute-0 ovn_controller[98048]: 2026-01-27T22:51:14Z|00049|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Jan 27 22:51:15 compute-0 nova_compute[185650]: 2026-01-27 22:51:15.109 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:51:17 compute-0 podman[241921]: 2026-01-27 22:51:17.3700973 +0000 UTC m=+0.073560551 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 22:51:18 compute-0 nova_compute[185650]: 2026-01-27 22:51:18.591 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:51:20 compute-0 nova_compute[185650]: 2026-01-27 22:51:20.109 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:51:20 compute-0 ovn_controller[98048]: 2026-01-27T22:51:20Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:17:dc:a3 192.168.0.99
Jan 27 22:51:20 compute-0 ovn_controller[98048]: 2026-01-27T22:51:20Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:17:dc:a3 192.168.0.99
Jan 27 22:51:20 compute-0 podman[241960]: 2026-01-27 22:51:20.372269992 +0000 UTC m=+0.074992173 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, vcs-type=git, config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 22:51:23 compute-0 nova_compute[185650]: 2026-01-27 22:51:23.594 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:51:25 compute-0 nova_compute[185650]: 2026-01-27 22:51:25.111 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:51:28 compute-0 nova_compute[185650]: 2026-01-27 22:51:28.597 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:51:29 compute-0 podman[201529]: time="2026-01-27T22:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:51:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 22:51:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4366 "" "Go-http-client/1.1"
Jan 27 22:51:30 compute-0 nova_compute[185650]: 2026-01-27 22:51:30.114 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:51:31 compute-0 podman[241982]: 2026-01-27 22:51:31.395437656 +0000 UTC m=+0.089195245 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 22:51:31 compute-0 podman[241983]: 2026-01-27 22:51:31.416091949 +0000 UTC m=+0.104668201 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Jan 27 22:51:31 compute-0 openstack_network_exporter[204648]: ERROR   22:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:51:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:51:31 compute-0 openstack_network_exporter[204648]: ERROR   22:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:51:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:51:33 compute-0 nova_compute[185650]: 2026-01-27 22:51:33.600 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:51:35 compute-0 nova_compute[185650]: 2026-01-27 22:51:35.117 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:51:36 compute-0 podman[242018]: 2026-01-27 22:51:36.378678112 +0000 UTC m=+0.070216128 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 22:51:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:38.105 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 22:51:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:38.105 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 22:51:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:38.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c646060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:51:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:38.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f826c6475f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:51:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:38.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:51:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:38.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:51:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:38.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:51:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:38.108 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:51:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:38.109 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:51:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:38.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:51:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:38.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:51:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:38.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:51:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:38.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:51:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:38.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:51:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:38.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:51:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:38.112 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd2c3fc6f-7629-469b-be68-8fe07acabe0f', 'name': 'vn-bxiivp3-qxfwvjemo3rq-sawqp3hw5btx-vnf-e5pqbtf6sduj', 'flavor': {'id': 'c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7e803ca7-2382-4e5a-95f7-55acaa154415'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8318d5a200d74e4386cf4972db015b75', 'user_id': '7387204f74504e288ed7a5dee73f5083', 'hostId': '6b704d868c202dfce1245c3ae64d5f83176b88963479398e3b586eea', 'status': 'active', 'metadata': {'metering.server_group': '3b67098f-eb50-41e2-8c8a-348367561673'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 22:51:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:38.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:51:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:38.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645490>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:51:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:38.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:51:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:38.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:51:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:38.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:51:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:38.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:51:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:38.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:51:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:38.115 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 5409358c-78dc-4761-841a-7f453c6209fb from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 22:51:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:38.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:51:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:38.116 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/5409358c-78dc-4761-841a-7f453c6209fb -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}154da27a0715c4500fb4356c9136f029f6352e657551e62d11427d8299e729cc" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 22:51:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:38.117 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:51:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:38.118 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:51:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:38.119 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:51:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:38.119 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:51:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:38.119 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:51:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:38.119 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6477a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:51:38 compute-0 podman[242042]: 2026-01-27 22:51:38.36552762 +0000 UTC m=+0.070452634 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 22:51:38 compute-0 nova_compute[185650]: 2026-01-27 22:51:38.603 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:51:40 compute-0 nova_compute[185650]: 2026-01-27 22:51:40.119 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.401 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Tue, 27 Jan 2026 22:51:38 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-8cc4b5b2-2f0b-42f3-80fc-6249b7873470 x-openstack-request-id: req-8cc4b5b2-2f0b-42f3-80fc-6249b7873470 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.401 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "5409358c-78dc-4761-841a-7f453c6209fb", "name": "vn-bxiivp3-je4u2ztq4ixb-joz7rt6vemeh-vnf-jpr5uezxduem", "status": "ACTIVE", "tenant_id": "8318d5a200d74e4386cf4972db015b75", "user_id": "7387204f74504e288ed7a5dee73f5083", "metadata": {"metering.server_group": "3b67098f-eb50-41e2-8c8a-348367561673"}, "hostId": "6b704d868c202dfce1245c3ae64d5f83176b88963479398e3b586eea", "image": {"id": "7e803ca7-2382-4e5a-95f7-55acaa154415", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/7e803ca7-2382-4e5a-95f7-55acaa154415"}]}, "flavor": {"id": "c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093"}]}, "created": "2026-01-27T22:50:36Z", "updated": "2026-01-27T22:50:45Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.99", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:17:dc:a3"}, {"version": 4, "addr": "192.168.122.238", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:17:dc:a3"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/5409358c-78dc-4761-841a-7f453c6209fb"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/5409358c-78dc-4761-841a-7f453c6209fb"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-27T22:50:45.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.402 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/5409358c-78dc-4761-841a-7f453c6209fb used request id req-8cc4b5b2-2f0b-42f3-80fc-6249b7873470 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.403 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5409358c-78dc-4761-841a-7f453c6209fb', 'name': 'vn-bxiivp3-je4u2ztq4ixb-joz7rt6vemeh-vnf-jpr5uezxduem', 'flavor': {'id': 'c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7e803ca7-2382-4e5a-95f7-55acaa154415'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8318d5a200d74e4386cf4972db015b75', 'user_id': '7387204f74504e288ed7a5dee73f5083', 'hostId': '6b704d868c202dfce1245c3ae64d5f83176b88963479398e3b586eea', 'status': 'active', 'metadata': {'metering.server_group': '3b67098f-eb50-41e2-8c8a-348367561673'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.407 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '344c74c3-95d6-4f19-993f-b4a89c9d074b', 'name': 'test_0', 'flavor': {'id': 'c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7e803ca7-2382-4e5a-95f7-55acaa154415'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8318d5a200d74e4386cf4972db015b75', 'user_id': '7387204f74504e288ed7a5dee73f5083', 'hostId': '6b704d868c202dfce1245c3ae64d5f83176b88963479398e3b586eea', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.411 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dd624b81-38f5-46aa-881b-ca66ace64fd3', 'name': 'vn-bxiivp3-2npykxfceygn-qfpmbakkd4ep-vnf-ztsky6llf24g', 'flavor': {'id': 'c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7e803ca7-2382-4e5a-95f7-55acaa154415'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8318d5a200d74e4386cf4972db015b75', 'user_id': '7387204f74504e288ed7a5dee73f5083', 'hostId': '6b704d868c202dfce1245c3ae64d5f83176b88963479398e3b586eea', 'status': 'active', 'metadata': {'metering.server_group': '3b67098f-eb50-41e2-8c8a-348367561673'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.412 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.412 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c646060>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.412 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c646060>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.413 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.414 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T22:51:40.412541) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.420 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.425 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 5409358c-78dc-4761-841a-7f453c6209fb / tapccfe58e9-3f inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.425 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.429 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.433 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.434 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.434 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f826c645dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.434 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.434 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.434 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.434 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.435 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.435 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-27T22:51:40.434861) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.435 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-bxiivp3-je4u2ztq4ixb-joz7rt6vemeh-vnf-jpr5uezxduem>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-bxiivp3-je4u2ztq4ixb-joz7rt6vemeh-vnf-jpr5uezxduem>]
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.435 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f826c647800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.435 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.435 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.436 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.436 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.436 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T22:51:40.436107) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.436 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.436 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.437 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.437 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.437 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.438 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f826c647650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.438 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.438 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.438 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.438 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.438 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T22:51:40.438337) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.438 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.439 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.439 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.439 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.440 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.440 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f826c645640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.440 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.440 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.440 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.440 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T22:51:40.440532) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.440 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.522 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.latency volume: 1582357831 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.522 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.latency volume: 10486324 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.523 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.604 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.latency volume: 2030470458 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.605 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.latency volume: 9512100 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.605 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.673 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.latency volume: 1982773015 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.674 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.latency volume: 11972381 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.675 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.744 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.latency volume: 1883856140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.744 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.latency volume: 12652797 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.744 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.796 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f826c8ae7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.796 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.796 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.797 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.797 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.797 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.incoming.bytes volume: 8364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.797 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.797 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.bytes volume: 2046 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.798 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.798 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f826c645a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.798 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.798 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.798 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.798 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.798 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.requests volume: 237 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.799 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.799 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.799 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.requests volume: 222 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.799 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.800 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.800 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.800 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T22:51:40.797068) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.800 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T22:51:40.798852) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.800 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.800 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.801 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.801 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.801 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.802 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.802 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f826c6462a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.802 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.802 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.802 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.802 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.802 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.802 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.803 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.803 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.803 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.803 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f826c647f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.803 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.804 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.804 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T22:51:40.802526) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.804 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.804 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.804 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T22:51:40.804261) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.832 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/cpu volume: 316200000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.854 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/cpu volume: 34450000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.877 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/cpu volume: 38240000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.896 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/cpu volume: 31400000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.897 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.897 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f826c645af0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.897 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.897 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.898 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.898 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.898 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.898 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f826c645d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.899 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T22:51:40.898082) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.899 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.899 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.899 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.900 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.900 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/memory.usage volume: 49.0234375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.900 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T22:51:40.900178) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.900 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/memory.usage volume: 49.69140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.901 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/memory.usage volume: 48.765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.901 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/memory.usage volume: 49.07421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.902 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.902 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f826c645b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.902 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.902 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.903 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.903 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.903 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T22:51:40.903284) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.904 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.904 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f826c644a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.904 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.905 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.905 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.905 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.905 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T22:51:40.905466) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.931 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.932 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.932 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.951 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.951 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.952 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.973 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.974 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:40.974 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.000 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.000 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.001 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.001 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.001 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f826c6453a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.001 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.001 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645490>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.001 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645490>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.002 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.002 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.002 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.002 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.002 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.002 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.003 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.003 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.003 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T22:51:41.002036) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.003 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.004 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.004 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.004 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.004 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.004 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.005 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f826c6454c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.005 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.005 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.005 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.005 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.005 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.latency volume: 560972745 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.005 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.latency volume: 98708783 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.005 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.latency volume: 82244967 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.005 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T22:51:41.005295) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.006 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.latency volume: 669467296 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.006 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.latency volume: 92088857 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.006 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.latency volume: 79077409 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.006 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.latency volume: 603707572 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.006 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.latency volume: 113814738 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.006 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.latency volume: 101138361 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.007 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.latency volume: 587344116 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.007 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.latency volume: 100532473 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.007 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.latency volume: 196826454 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.007 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.008 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f826c645520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.008 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.008 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645550>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.008 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645550>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.008 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.008 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.008 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.008 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.008 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.009 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.009 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.009 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.009 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.010 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T22:51:41.008298) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.010 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.010 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.010 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.010 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.011 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.011 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f826c645d90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.011 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.011 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.011 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.011 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.011 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.incoming.bytes.delta volume: 3431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.011 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.012 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.012 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T22:51:41.011507) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.012 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.012 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.012 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f826c646570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.012 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.012 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.012 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.013 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.013 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.013 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T22:51:41.013020) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.013 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.013 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.014 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.014 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.014 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f826c645580>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.014 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.014 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.014 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.014 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.014 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.usage volume: 21430272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.014 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T22:51:41.014630) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.015 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.015 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.015 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.015 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.015 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.016 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.016 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.016 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.016 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.016 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.017 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.017 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f826c6455e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.017 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.017 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645610>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.018 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645610>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.018 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.018 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.bytes volume: 41910272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.018 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T22:51:41.018082) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.018 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.018 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.018 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.bytes volume: 41697280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.019 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.019 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.019 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.019 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.019 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.019 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.020 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.020 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.020 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.020 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f826c644050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.020 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.020 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.021 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.021 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.021 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.incoming.packets volume: 54 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.021 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.021 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.021 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T22:51:41.021068) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.022 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.022 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f826c647860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.022 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.022 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.022 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.023 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.023 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.outgoing.bytes volume: 7634 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.023 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.outgoing.bytes volume: 1991 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.023 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.023 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.024 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T22:51:41.022998) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.024 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.024 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f826c6476e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.024 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.024 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.024 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.024 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.024 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.outgoing.bytes.delta volume: 2672 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.024 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.025 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.025 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T22:51:41.024640) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.025 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.outgoing.bytes.delta volume: 295 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.025 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.026 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f826c6456a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.026 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.026 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645730>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.026 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645730>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.026 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.026 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.outgoing.packets volume: 67 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.026 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.outgoing.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.026 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T22:51:41.026322) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.027 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.027 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.027 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.027 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f826f277b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.027 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.027 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.027 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.028 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.028 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.028 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T22:51:41.027991) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.028 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.028 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.028 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.028 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.029 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.029 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.029 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.029 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.029 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.030 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.030 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.030 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.030 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f826c647770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.030 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.030 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6477a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.030 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6477a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.031 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.031 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.031 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-bxiivp3-je4u2ztq4ixb-joz7rt6vemeh-vnf-jpr5uezxduem>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-bxiivp3-je4u2ztq4ixb-joz7rt6vemeh-vnf-jpr5uezxduem>]
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.031 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-27T22:51:41.030986) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.033 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.033 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.033 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.033 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.033 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.033 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.033 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.033 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.034 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.034 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.034 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.034 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.034 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:51:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:51:41.034 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:51:41 compute-0 podman[242062]: 2026-01-27 22:51:41.392366654 +0000 UTC m=+0.093960650 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, version=9.4, com.redhat.component=ubi9-container, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, config_id=kepler, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, release=1214.1726694543, release-0.7.12=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Jan 27 22:51:41 compute-0 podman[242063]: 2026-01-27 22:51:41.467072917 +0000 UTC m=+0.158239208 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 27 22:51:43 compute-0 nova_compute[185650]: 2026-01-27 22:51:43.605 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:51:45 compute-0 nova_compute[185650]: 2026-01-27 22:51:45.121 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:51:48 compute-0 podman[242105]: 2026-01-27 22:51:48.387622138 +0000 UTC m=+0.086755819 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 22:51:48 compute-0 nova_compute[185650]: 2026-01-27 22:51:48.607 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:51:50 compute-0 nova_compute[185650]: 2026-01-27 22:51:50.124 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:51:51 compute-0 podman[242130]: 2026-01-27 22:51:51.381859782 +0000 UTC m=+0.075009701 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, maintainer=Red Hat, Inc., 
url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1755695350, build-date=2025-08-20T13:12:41)
Jan 27 22:51:51 compute-0 nova_compute[185650]: 2026-01-27 22:51:51.992 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:51:52 compute-0 nova_compute[185650]: 2026-01-27 22:51:52.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:51:53 compute-0 nova_compute[185650]: 2026-01-27 22:51:53.049 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:51:53 compute-0 nova_compute[185650]: 2026-01-27 22:51:53.050 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:51:53 compute-0 nova_compute[185650]: 2026-01-27 22:51:53.051 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:51:53 compute-0 nova_compute[185650]: 2026-01-27 22:51:53.051 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 22:51:53 compute-0 nova_compute[185650]: 2026-01-27 22:51:53.151 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:51:53 compute-0 nova_compute[185650]: 2026-01-27 22:51:53.226 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:51:53 compute-0 nova_compute[185650]: 2026-01-27 22:51:53.228 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:51:53 compute-0 nova_compute[185650]: 2026-01-27 22:51:53.284 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:51:53 compute-0 nova_compute[185650]: 2026-01-27 22:51:53.286 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:51:53 compute-0 nova_compute[185650]: 2026-01-27 22:51:53.345 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:51:53 compute-0 nova_compute[185650]: 2026-01-27 22:51:53.346 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:51:53 compute-0 nova_compute[185650]: 2026-01-27 22:51:53.404 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:51:53 compute-0 nova_compute[185650]: 2026-01-27 22:51:53.410 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:51:53 compute-0 nova_compute[185650]: 2026-01-27 22:51:53.467 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:51:53 compute-0 nova_compute[185650]: 2026-01-27 22:51:53.469 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:51:53 compute-0 nova_compute[185650]: 2026-01-27 22:51:53.526 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:51:53 compute-0 nova_compute[185650]: 2026-01-27 22:51:53.527 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:51:53 compute-0 nova_compute[185650]: 2026-01-27 22:51:53.589 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:51:53 compute-0 nova_compute[185650]: 2026-01-27 22:51:53.590 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:51:53 compute-0 nova_compute[185650]: 2026-01-27 22:51:53.609 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:51:53 compute-0 nova_compute[185650]: 2026-01-27 22:51:53.650 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:51:53 compute-0 nova_compute[185650]: 2026-01-27 22:51:53.655 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:51:53 compute-0 nova_compute[185650]: 2026-01-27 22:51:53.715 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:51:53 compute-0 nova_compute[185650]: 2026-01-27 22:51:53.716 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:51:53 compute-0 nova_compute[185650]: 2026-01-27 22:51:53.777 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:51:53 compute-0 nova_compute[185650]: 2026-01-27 22:51:53.778 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:51:53 compute-0 nova_compute[185650]: 2026-01-27 22:51:53.841 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:51:53 compute-0 nova_compute[185650]: 2026-01-27 22:51:53.842 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:51:53 compute-0 nova_compute[185650]: 2026-01-27 22:51:53.904 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:51:53 compute-0 nova_compute[185650]: 2026-01-27 22:51:53.910 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:51:53 compute-0 nova_compute[185650]: 2026-01-27 22:51:53.968 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:51:53 compute-0 nova_compute[185650]: 2026-01-27 22:51:53.969 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:51:54 compute-0 nova_compute[185650]: 2026-01-27 22:51:54.024 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:51:54 compute-0 nova_compute[185650]: 2026-01-27 22:51:54.026 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:51:54 compute-0 nova_compute[185650]: 2026-01-27 22:51:54.093 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:51:54 compute-0 nova_compute[185650]: 2026-01-27 22:51:54.095 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:51:54 compute-0 nova_compute[185650]: 2026-01-27 22:51:54.158 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:51:54 compute-0 nova_compute[185650]: 2026-01-27 22:51:54.536 185654 WARNING nova.virt.libvirt.driver [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 22:51:54 compute-0 nova_compute[185650]: 2026-01-27 22:51:54.537 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4624MB free_disk=72.35653305053711GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 22:51:54 compute-0 nova_compute[185650]: 2026-01-27 22:51:54.538 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:51:54 compute-0 nova_compute[185650]: 2026-01-27 22:51:54.538 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:51:55 compute-0 nova_compute[185650]: 2026-01-27 22:51:55.105 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 344c74c3-95d6-4f19-993f-b4a89c9d074b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:51:55 compute-0 nova_compute[185650]: 2026-01-27 22:51:55.105 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance d2c3fc6f-7629-469b-be68-8fe07acabe0f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:51:55 compute-0 nova_compute[185650]: 2026-01-27 22:51:55.106 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance dd624b81-38f5-46aa-881b-ca66ace64fd3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:51:55 compute-0 nova_compute[185650]: 2026-01-27 22:51:55.106 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 5409358c-78dc-4761-841a-7f453c6209fb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:51:55 compute-0 nova_compute[185650]: 2026-01-27 22:51:55.106 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 22:51:55 compute-0 nova_compute[185650]: 2026-01-27 22:51:55.106 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 22:51:55 compute-0 nova_compute[185650]: 2026-01-27 22:51:55.127 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:51:55 compute-0 nova_compute[185650]: 2026-01-27 22:51:55.178 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 22:51:55 compute-0 nova_compute[185650]: 2026-01-27 22:51:55.192 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 22:51:55 compute-0 nova_compute[185650]: 2026-01-27 22:51:55.193 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 22:51:55 compute-0 nova_compute[185650]: 2026-01-27 22:51:55.193 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:51:56 compute-0 nova_compute[185650]: 2026-01-27 22:51:56.193 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:51:56 compute-0 nova_compute[185650]: 2026-01-27 22:51:56.194 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 22:51:56 compute-0 nova_compute[185650]: 2026-01-27 22:51:56.194 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 22:51:56 compute-0 nova_compute[185650]: 2026-01-27 22:51:56.396 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "refresh_cache-344c74c3-95d6-4f19-993f-b4a89c9d074b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 22:51:56 compute-0 nova_compute[185650]: 2026-01-27 22:51:56.397 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquired lock "refresh_cache-344c74c3-95d6-4f19-993f-b4a89c9d074b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 22:51:56 compute-0 nova_compute[185650]: 2026-01-27 22:51:56.397 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 22:51:56 compute-0 nova_compute[185650]: 2026-01-27 22:51:56.397 185654 DEBUG nova.objects.instance [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 344c74c3-95d6-4f19-993f-b4a89c9d074b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 22:51:57 compute-0 nova_compute[185650]: 2026-01-27 22:51:57.231 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Updating instance_info_cache with network_info: [{"id": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "address": "fa:16:3e:27:72:fe", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.119", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap389fa2e1-24", "ovs_interfaceid": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 22:51:57 compute-0 nova_compute[185650]: 2026-01-27 22:51:57.248 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Releasing lock "refresh_cache-344c74c3-95d6-4f19-993f-b4a89c9d074b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 22:51:57 compute-0 nova_compute[185650]: 2026-01-27 22:51:57.249 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 22:51:57 compute-0 nova_compute[185650]: 2026-01-27 22:51:57.250 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:51:57 compute-0 nova_compute[185650]: 2026-01-27 22:51:57.251 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:51:57 compute-0 nova_compute[185650]: 2026-01-27 22:51:57.252 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:51:57 compute-0 nova_compute[185650]: 2026-01-27 22:51:57.253 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:51:57 compute-0 nova_compute[185650]: 2026-01-27 22:51:57.254 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 22:51:58 compute-0 nova_compute[185650]: 2026-01-27 22:51:58.049 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:51:58 compute-0 nova_compute[185650]: 2026-01-27 22:51:58.612 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:51:59 compute-0 podman[201529]: time="2026-01-27T22:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:51:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 22:51:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4371 "" "Go-http-client/1.1"
Jan 27 22:51:59 compute-0 nova_compute[185650]: 2026-01-27 22:51:59.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:52:00 compute-0 nova_compute[185650]: 2026-01-27 22:52:00.129 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:52:01 compute-0 openstack_network_exporter[204648]: ERROR   22:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:52:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:52:01 compute-0 openstack_network_exporter[204648]: ERROR   22:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:52:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:52:02 compute-0 podman[242199]: 2026-01-27 22:52:02.356552118 +0000 UTC m=+0.060561313 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 27 22:52:02 compute-0 podman[242200]: 2026-01-27 22:52:02.375883369 +0000 UTC m=+0.076532453 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126)
Jan 27 22:52:03 compute-0 nova_compute[185650]: 2026-01-27 22:52:03.616 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:52:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:52:04.141 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:52:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:52:04.141 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:52:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:52:04.142 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:52:05 compute-0 nova_compute[185650]: 2026-01-27 22:52:05.131 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:52:07 compute-0 podman[242237]: 2026-01-27 22:52:07.374924259 +0000 UTC m=+0.068315061 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 22:52:08 compute-0 nova_compute[185650]: 2026-01-27 22:52:08.619 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:52:09 compute-0 podman[242261]: 2026-01-27 22:52:09.403586209 +0000 UTC m=+0.095354870 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Jan 27 22:52:10 compute-0 nova_compute[185650]: 2026-01-27 22:52:10.133 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:52:12 compute-0 podman[242280]: 2026-01-27 22:52:12.383198709 +0000 UTC m=+0.085213306 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, name=ubi9, version=9.4, distribution-scope=public, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, release=1214.1726694543, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, vcs-type=git)
Jan 27 22:52:12 compute-0 podman[242281]: 2026-01-27 22:52:12.419945819 +0000 UTC m=+0.115310487 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 22:52:13 compute-0 nova_compute[185650]: 2026-01-27 22:52:13.623 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:52:15 compute-0 nova_compute[185650]: 2026-01-27 22:52:15.136 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:52:18 compute-0 nova_compute[185650]: 2026-01-27 22:52:18.626 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:52:19 compute-0 podman[242326]: 2026-01-27 22:52:19.391245521 +0000 UTC m=+0.075825153 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 22:52:20 compute-0 nova_compute[185650]: 2026-01-27 22:52:20.141 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:52:22 compute-0 podman[242350]: 2026-01-27 22:52:22.399253278 +0000 UTC m=+0.093126651 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.buildah.version=1.33.7, name=ubi9-minimal, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.6, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc.)
Jan 27 22:52:23 compute-0 nova_compute[185650]: 2026-01-27 22:52:23.631 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:52:25 compute-0 nova_compute[185650]: 2026-01-27 22:52:25.144 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:52:28 compute-0 nova_compute[185650]: 2026-01-27 22:52:28.634 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:52:29 compute-0 podman[201529]: time="2026-01-27T22:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:52:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 22:52:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4362 "" "Go-http-client/1.1"
Jan 27 22:52:30 compute-0 nova_compute[185650]: 2026-01-27 22:52:30.145 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:52:31 compute-0 openstack_network_exporter[204648]: ERROR   22:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:52:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:52:31 compute-0 openstack_network_exporter[204648]: ERROR   22:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:52:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:52:33 compute-0 podman[242372]: 2026-01-27 22:52:33.412288467 +0000 UTC m=+0.105726010 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 22:52:33 compute-0 podman[242373]: 2026-01-27 22:52:33.435606905 +0000 UTC m=+0.111176007 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 27 22:52:33 compute-0 nova_compute[185650]: 2026-01-27 22:52:33.638 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:52:35 compute-0 nova_compute[185650]: 2026-01-27 22:52:35.147 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:52:38 compute-0 podman[242411]: 2026-01-27 22:52:38.38801851 +0000 UTC m=+0.083481131 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 22:52:38 compute-0 nova_compute[185650]: 2026-01-27 22:52:38.641 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:52:40 compute-0 nova_compute[185650]: 2026-01-27 22:52:40.149 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:52:40 compute-0 podman[242435]: 2026-01-27 22:52:40.383245018 +0000 UTC m=+0.090120139 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 27 22:52:43 compute-0 podman[242456]: 2026-01-27 22:52:43.390941195 +0000 UTC m=+0.078082295 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, config_id=kepler, release=1214.1726694543, io.buildah.version=1.29.0, managed_by=edpm_ansible, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', 
'/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64)
Jan 27 22:52:43 compute-0 podman[242457]: 2026-01-27 22:52:43.431666452 +0000 UTC m=+0.112845231 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 22:52:43 compute-0 nova_compute[185650]: 2026-01-27 22:52:43.644 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:52:45 compute-0 nova_compute[185650]: 2026-01-27 22:52:45.151 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:52:47 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 27 22:52:48 compute-0 nova_compute[185650]: 2026-01-27 22:52:48.649 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:52:50 compute-0 nova_compute[185650]: 2026-01-27 22:52:50.153 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:52:50 compute-0 podman[242503]: 2026-01-27 22:52:50.431530284 +0000 UTC m=+0.122531813 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 22:52:53 compute-0 podman[242525]: 2026-01-27 22:52:53.358396824 +0000 UTC m=+0.065606280 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, release=1755695350, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal)
Jan 27 22:52:53 compute-0 nova_compute[185650]: 2026-01-27 22:52:53.653 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:52:53 compute-0 nova_compute[185650]: 2026-01-27 22:52:53.992 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:52:54 compute-0 nova_compute[185650]: 2026-01-27 22:52:54.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:52:54 compute-0 nova_compute[185650]: 2026-01-27 22:52:54.993 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 22:52:55 compute-0 nova_compute[185650]: 2026-01-27 22:52:55.155 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:52:55 compute-0 nova_compute[185650]: 2026-01-27 22:52:55.499 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "refresh_cache-d2c3fc6f-7629-469b-be68-8fe07acabe0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 22:52:55 compute-0 nova_compute[185650]: 2026-01-27 22:52:55.500 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquired lock "refresh_cache-d2c3fc6f-7629-469b-be68-8fe07acabe0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 22:52:55 compute-0 nova_compute[185650]: 2026-01-27 22:52:55.500 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 22:52:56 compute-0 nova_compute[185650]: 2026-01-27 22:52:56.647 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Updating instance_info_cache with network_info: [{"id": "2083900f-b759-4c97-8c34-5ad3832f0446", "address": "fa:16:3e:27:7c:56", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2083900f-b7", "ovs_interfaceid": "2083900f-b759-4c97-8c34-5ad3832f0446", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 22:52:56 compute-0 nova_compute[185650]: 2026-01-27 22:52:56.665 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Releasing lock "refresh_cache-d2c3fc6f-7629-469b-be68-8fe07acabe0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 22:52:56 compute-0 nova_compute[185650]: 2026-01-27 22:52:56.665 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 22:52:56 compute-0 nova_compute[185650]: 2026-01-27 22:52:56.666 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:52:56 compute-0 nova_compute[185650]: 2026-01-27 22:52:56.667 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:52:56 compute-0 nova_compute[185650]: 2026-01-27 22:52:56.668 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:52:56 compute-0 nova_compute[185650]: 2026-01-27 22:52:56.668 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:52:56 compute-0 nova_compute[185650]: 2026-01-27 22:52:56.668 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 22:52:56 compute-0 nova_compute[185650]: 2026-01-27 22:52:56.669 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:52:56 compute-0 nova_compute[185650]: 2026-01-27 22:52:56.696 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:52:56 compute-0 nova_compute[185650]: 2026-01-27 22:52:56.696 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:52:56 compute-0 nova_compute[185650]: 2026-01-27 22:52:56.697 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:52:56 compute-0 nova_compute[185650]: 2026-01-27 22:52:56.697 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 22:52:56 compute-0 nova_compute[185650]: 2026-01-27 22:52:56.801 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:52:56 compute-0 nova_compute[185650]: 2026-01-27 22:52:56.882 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:52:56 compute-0 nova_compute[185650]: 2026-01-27 22:52:56.883 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:52:56 compute-0 nova_compute[185650]: 2026-01-27 22:52:56.961 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:52:56 compute-0 nova_compute[185650]: 2026-01-27 22:52:56.962 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:52:57 compute-0 nova_compute[185650]: 2026-01-27 22:52:57.028 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:52:57 compute-0 nova_compute[185650]: 2026-01-27 22:52:57.029 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:52:57 compute-0 nova_compute[185650]: 2026-01-27 22:52:57.107 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:52:57 compute-0 nova_compute[185650]: 2026-01-27 22:52:57.114 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:52:57 compute-0 nova_compute[185650]: 2026-01-27 22:52:57.173 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:52:57 compute-0 nova_compute[185650]: 2026-01-27 22:52:57.175 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:52:57 compute-0 nova_compute[185650]: 2026-01-27 22:52:57.231 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:52:57 compute-0 nova_compute[185650]: 2026-01-27 22:52:57.233 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:52:57 compute-0 nova_compute[185650]: 2026-01-27 22:52:57.296 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:52:57 compute-0 nova_compute[185650]: 2026-01-27 22:52:57.298 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:52:57 compute-0 nova_compute[185650]: 2026-01-27 22:52:57.357 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:52:57 compute-0 nova_compute[185650]: 2026-01-27 22:52:57.366 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:52:57 compute-0 nova_compute[185650]: 2026-01-27 22:52:57.423 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:52:57 compute-0 nova_compute[185650]: 2026-01-27 22:52:57.424 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:52:57 compute-0 nova_compute[185650]: 2026-01-27 22:52:57.480 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:52:57 compute-0 nova_compute[185650]: 2026-01-27 22:52:57.481 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:52:57 compute-0 nova_compute[185650]: 2026-01-27 22:52:57.539 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:52:57 compute-0 nova_compute[185650]: 2026-01-27 22:52:57.541 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:52:57 compute-0 nova_compute[185650]: 2026-01-27 22:52:57.614 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:52:57 compute-0 nova_compute[185650]: 2026-01-27 22:52:57.621 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:52:57 compute-0 nova_compute[185650]: 2026-01-27 22:52:57.683 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:52:57 compute-0 nova_compute[185650]: 2026-01-27 22:52:57.684 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:52:57 compute-0 nova_compute[185650]: 2026-01-27 22:52:57.760 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:52:57 compute-0 nova_compute[185650]: 2026-01-27 22:52:57.761 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:52:57 compute-0 nova_compute[185650]: 2026-01-27 22:52:57.821 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:52:57 compute-0 nova_compute[185650]: 2026-01-27 22:52:57.822 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:52:57 compute-0 nova_compute[185650]: 2026-01-27 22:52:57.903 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:52:58 compute-0 nova_compute[185650]: 2026-01-27 22:52:58.249 185654 WARNING nova.virt.libvirt.driver [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 22:52:58 compute-0 nova_compute[185650]: 2026-01-27 22:52:58.250 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4604MB free_disk=72.35653305053711GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 22:52:58 compute-0 nova_compute[185650]: 2026-01-27 22:52:58.250 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:52:58 compute-0 nova_compute[185650]: 2026-01-27 22:52:58.250 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:52:58 compute-0 nova_compute[185650]: 2026-01-27 22:52:58.336 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 344c74c3-95d6-4f19-993f-b4a89c9d074b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:52:58 compute-0 nova_compute[185650]: 2026-01-27 22:52:58.336 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance d2c3fc6f-7629-469b-be68-8fe07acabe0f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:52:58 compute-0 nova_compute[185650]: 2026-01-27 22:52:58.336 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance dd624b81-38f5-46aa-881b-ca66ace64fd3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:52:58 compute-0 nova_compute[185650]: 2026-01-27 22:52:58.336 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 5409358c-78dc-4761-841a-7f453c6209fb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:52:58 compute-0 nova_compute[185650]: 2026-01-27 22:52:58.337 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 22:52:58 compute-0 nova_compute[185650]: 2026-01-27 22:52:58.337 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 22:52:58 compute-0 nova_compute[185650]: 2026-01-27 22:52:58.425 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 22:52:58 compute-0 nova_compute[185650]: 2026-01-27 22:52:58.439 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 22:52:58 compute-0 nova_compute[185650]: 2026-01-27 22:52:58.441 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 22:52:58 compute-0 nova_compute[185650]: 2026-01-27 22:52:58.441 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:52:58 compute-0 nova_compute[185650]: 2026-01-27 22:52:58.655 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:52:59 compute-0 podman[201529]: time="2026-01-27T22:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:52:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 22:52:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4377 "" "Go-http-client/1.1"
Jan 27 22:53:00 compute-0 nova_compute[185650]: 2026-01-27 22:53:00.158 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:53:00 compute-0 nova_compute[185650]: 2026-01-27 22:53:00.436 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:53:00 compute-0 nova_compute[185650]: 2026-01-27 22:53:00.437 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:53:00 compute-0 nova_compute[185650]: 2026-01-27 22:53:00.992 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:53:01 compute-0 openstack_network_exporter[204648]: ERROR   22:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:53:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:53:01 compute-0 openstack_network_exporter[204648]: ERROR   22:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:53:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:53:03 compute-0 nova_compute[185650]: 2026-01-27 22:53:03.658 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:53:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:53:04.142 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:53:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:53:04.142 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:53:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:53:04.143 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:53:04 compute-0 podman[242597]: 2026-01-27 22:53:04.420196517 +0000 UTC m=+0.109079994 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 22:53:04 compute-0 podman[242598]: 2026-01-27 22:53:04.423918156 +0000 UTC m=+0.097574697 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base 
Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.build-date=20260126)
Jan 27 22:53:05 compute-0 nova_compute[185650]: 2026-01-27 22:53:05.160 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:53:08 compute-0 nova_compute[185650]: 2026-01-27 22:53:08.661 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:53:09 compute-0 podman[242635]: 2026-01-27 22:53:09.371312451 +0000 UTC m=+0.072407447 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 22:53:10 compute-0 nova_compute[185650]: 2026-01-27 22:53:10.163 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:53:11 compute-0 podman[242657]: 2026-01-27 22:53:11.37908969 +0000 UTC m=+0.074002661 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Jan 27 22:53:13 compute-0 nova_compute[185650]: 2026-01-27 22:53:13.663 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:53:14 compute-0 podman[242677]: 2026-01-27 22:53:14.385836687 +0000 UTC m=+0.085307463 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.buildah.version=1.29.0, name=ubi9, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, config_id=kepler, com.redhat.component=ubi9-container, io.openshift.expose-services=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 27 22:53:14 compute-0 podman[242678]: 2026-01-27 22:53:14.409483626 +0000 UTC m=+0.106447168 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251202, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 22:53:15 compute-0 nova_compute[185650]: 2026-01-27 22:53:15.165 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:53:18 compute-0 nova_compute[185650]: 2026-01-27 22:53:18.667 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:53:20 compute-0 nova_compute[185650]: 2026-01-27 22:53:20.167 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:53:21 compute-0 podman[242722]: 2026-01-27 22:53:21.401547728 +0000 UTC m=+0.103177457 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 22:53:23 compute-0 nova_compute[185650]: 2026-01-27 22:53:23.669 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:53:24 compute-0 podman[242745]: 2026-01-27 22:53:24.159432658 +0000 UTC m=+0.116766249 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, vcs-type=git, version=9.6, container_name=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, name=ubi9-minimal, config_id=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64)
Jan 27 22:53:25 compute-0 nova_compute[185650]: 2026-01-27 22:53:25.169 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:53:28 compute-0 nova_compute[185650]: 2026-01-27 22:53:28.673 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:53:29 compute-0 podman[201529]: time="2026-01-27T22:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:53:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 22:53:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4374 "" "Go-http-client/1.1"
Jan 27 22:53:30 compute-0 nova_compute[185650]: 2026-01-27 22:53:30.171 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:53:31 compute-0 openstack_network_exporter[204648]: ERROR   22:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:53:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:53:31 compute-0 openstack_network_exporter[204648]: ERROR   22:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:53:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:53:33 compute-0 nova_compute[185650]: 2026-01-27 22:53:33.677 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:53:35 compute-0 nova_compute[185650]: 2026-01-27 22:53:35.173 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:53:35 compute-0 podman[242766]: 2026-01-27 22:53:35.376581095 +0000 UTC m=+0.068460422 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260126, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 27 22:53:35 compute-0 podman[242765]: 2026-01-27 22:53:35.404901323 +0000 UTC m=+0.092857686 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.105 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.106 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c646060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f826c6475f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.108 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.109 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.109 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.109 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.114 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd2c3fc6f-7629-469b-be68-8fe07acabe0f', 'name': 'vn-bxiivp3-qxfwvjemo3rq-sawqp3hw5btx-vnf-e5pqbtf6sduj', 'flavor': {'id': 'c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7e803ca7-2382-4e5a-95f7-55acaa154415'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8318d5a200d74e4386cf4972db015b75', 'user_id': '7387204f74504e288ed7a5dee73f5083', 'hostId': '6b704d868c202dfce1245c3ae64d5f83176b88963479398e3b586eea', 'status': 'active', 'metadata': {'metering.server_group': '3b67098f-eb50-41e2-8c8a-348367561673'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645490>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.116 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.117 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.117 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.118 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.119 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5409358c-78dc-4761-841a-7f453c6209fb', 'name': 'vn-bxiivp3-je4u2ztq4ixb-joz7rt6vemeh-vnf-jpr5uezxduem', 'flavor': {'id': 'c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7e803ca7-2382-4e5a-95f7-55acaa154415'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8318d5a200d74e4386cf4972db015b75', 'user_id': '7387204f74504e288ed7a5dee73f5083', 'hostId': '6b704d868c202dfce1245c3ae64d5f83176b88963479398e3b586eea', 'status': 'active', 'metadata': {'metering.server_group': '3b67098f-eb50-41e2-8c8a-348367561673'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.119 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.122 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.122 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.123 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.123 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.124 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.124 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.124 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6477a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.127 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '344c74c3-95d6-4f19-993f-b4a89c9d074b', 'name': 'test_0', 'flavor': {'id': 'c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7e803ca7-2382-4e5a-95f7-55acaa154415'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8318d5a200d74e4386cf4972db015b75', 'user_id': '7387204f74504e288ed7a5dee73f5083', 'hostId': '6b704d868c202dfce1245c3ae64d5f83176b88963479398e3b586eea', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.131 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dd624b81-38f5-46aa-881b-ca66ace64fd3', 'name': 'vn-bxiivp3-2npykxfceygn-qfpmbakkd4ep-vnf-ztsky6llf24g', 'flavor': {'id': 'c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7e803ca7-2382-4e5a-95f7-55acaa154415'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8318d5a200d74e4386cf4972db015b75', 'user_id': '7387204f74504e288ed7a5dee73f5083', 'hostId': '6b704d868c202dfce1245c3ae64d5f83176b88963479398e3b586eea', 'status': 'active', 'metadata': {'metering.server_group': '3b67098f-eb50-41e2-8c8a-348367561673'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.131 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.131 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c646060>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.131 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c646060>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.132 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.132 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T22:53:38.132016) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.136 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.141 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.147 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.152 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.153 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.154 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f826c645dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.154 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.154 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f826c647800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.155 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.155 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.155 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.155 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.155 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.156 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.156 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.157 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.157 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T22:53:38.155410) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.158 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.158 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f826c647650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.158 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.158 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.158 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.159 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.159 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.159 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T22:53:38.159067) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.159 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.160 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.160 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.161 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.161 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f826c645640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.161 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.161 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.161 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.161 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.162 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T22:53:38.161873) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.243 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.latency volume: 1582357831 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.243 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.latency volume: 10486324 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.243 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.312 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.latency volume: 2048805649 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.312 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.latency volume: 9512100 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.313 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.387 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.latency volume: 1982773015 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.388 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.latency volume: 11972381 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.388 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.466 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.latency volume: 1883856140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.467 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.latency volume: 12652797 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.467 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.468 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.468 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f826c8ae7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.469 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.469 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.469 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.469 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.469 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.incoming.bytes volume: 8364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.469 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.incoming.bytes volume: 1528 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.470 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.bytes volume: 2046 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.470 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.471 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T22:53:38.469389) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.471 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.471 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f826c645a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.471 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.471 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.471 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.472 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.472 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.requests volume: 237 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.472 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.472 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T22:53:38.472088) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.472 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.473 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.473 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.474 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.474 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.474 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.474 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.475 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.475 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.475 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.476 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.476 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f826c6462a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.476 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.476 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.477 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.477 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.477 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.477 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.478 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.478 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.478 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T22:53:38.477166) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.479 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.479 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f826c647f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.479 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.479 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.479 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.480 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T22:53:38.479745) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.479 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.507 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/cpu volume: 317600000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.534 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/cpu volume: 35810000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.564 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/cpu volume: 39600000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.588 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/cpu volume: 32790000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.588 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.589 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f826c645af0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.589 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.589 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.589 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.589 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.590 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.590 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f826c645d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.591 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.591 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.591 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.591 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T22:53:38.589572) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.591 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T22:53:38.591623) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.591 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.592 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/memory.usage volume: 49.0234375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.592 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/memory.usage volume: 49.07421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.592 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/memory.usage volume: 48.765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.592 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/memory.usage volume: 49.07421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.593 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.593 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f826c645b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.593 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.593 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.593 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.594 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.594 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.595 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T22:53:38.594020) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.595 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f826c644a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.595 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.595 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.595 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.596 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.597 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T22:53:38.595963) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.622 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.622 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.622 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.650 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.650 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.650 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 nova_compute[185650]: 2026-01-27 22:53:38.680 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.680 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.681 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.687 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.716 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.716 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.717 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.717 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f826c6453a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.718 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.718 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645490>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.718 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645490>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.718 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.718 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.719 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T22:53:38.718436) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.719 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.719 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.719 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.719 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.720 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.720 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.720 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.721 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.721 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.721 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.721 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.722 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.722 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f826c6454c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.722 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.723 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.723 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.723 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.723 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.latency volume: 560972745 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.723 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T22:53:38.723280) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.724 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.latency volume: 98708783 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.724 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.latency volume: 82244967 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.724 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.latency volume: 669467296 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.725 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.latency volume: 92088857 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.725 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.latency volume: 79077409 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.725 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.latency volume: 603707572 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.725 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.latency volume: 113814738 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.726 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.latency volume: 101138361 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.726 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.latency volume: 587344116 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.726 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.latency volume: 100532473 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.727 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.latency volume: 196826454 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.727 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.728 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f826c645520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.728 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.728 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645550>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.728 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645550>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.728 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.728 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.729 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.729 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.729 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T22:53:38.728592) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.729 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.730 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.730 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.730 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.731 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.731 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.731 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.732 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.732 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.732 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.733 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f826c645d90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.733 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.733 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.733 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.733 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.733 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.734 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.incoming.bytes.delta volume: 42 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.734 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T22:53:38.733512) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.734 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.734 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.735 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.735 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f826c646570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.735 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.735 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.735 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.736 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.736 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.736 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.736 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.736 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.737 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.737 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f826c645580>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.737 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.737 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.738 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.738 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.738 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.usage volume: 21430272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.738 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.738 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T22:53:38.735963) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.739 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T22:53:38.738156) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.739 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.739 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.739 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.740 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.740 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.740 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.741 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.741 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.741 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.742 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.742 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.743 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f826c6455e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.743 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.743 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645610>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.743 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645610>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.743 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.743 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T22:53:38.743434) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.743 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.bytes volume: 41910272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.744 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.744 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.745 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.745 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.745 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.746 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.746 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.746 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.747 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.747 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.747 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.748 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.748 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f826c644050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.748 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.748 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.748 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.749 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.749 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.incoming.packets volume: 54 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.749 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.749 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.750 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.750 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T22:53:38.748982) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.750 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.751 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f826c647860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.751 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.751 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.751 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.751 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.751 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.outgoing.bytes volume: 7704 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.752 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.outgoing.bytes volume: 2328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.752 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T22:53:38.751595) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.752 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.752 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.753 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.753 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f826c6476e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.753 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.753 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.754 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.754 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.754 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.754 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.outgoing.bytes.delta volume: 337 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.755 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T22:53:38.754154) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.755 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.755 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.755 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.755 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f826c6456a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.756 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.756 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645730>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.756 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645730>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.756 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.756 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/network.outgoing.packets volume: 68 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.756 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T22:53:38.756412) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.757 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.757 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.757 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.758 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.758 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f826f277b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.758 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.758 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.758 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.759 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.759 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.759 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T22:53:38.758970) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.759 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.759 14 DEBUG ceilometer.compute.pollsters [-] d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.760 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.760 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.760 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.761 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.761 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.761 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.762 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.762 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.762 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.763 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.763 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f826c647770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.763 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.764 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.764 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.764 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.764 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.764 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.764 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.764 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.764 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.764 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.764 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.764 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.764 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:53:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:53:38.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:53:40 compute-0 nova_compute[185650]: 2026-01-27 22:53:40.176 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:53:40 compute-0 podman[242805]: 2026-01-27 22:53:40.417815662 +0000 UTC m=+0.112816465 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 22:53:42 compute-0 podman[242827]: 2026-01-27 22:53:42.409006743 +0000 UTC m=+0.098246411 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Jan 27 22:53:43 compute-0 nova_compute[185650]: 2026-01-27 22:53:43.684 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:53:44 compute-0 podman[242848]: 2026-01-27 22:53:44.785004647 +0000 UTC m=+0.098457676 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, config_id=kepler, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, release-0.7.12=, managed_by=edpm_ansible, vcs-type=git, container_name=kepler)
Jan 27 22:53:44 compute-0 podman[242849]: 2026-01-27 22:53:44.82405654 +0000 UTC m=+0.127655773 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 27 22:53:45 compute-0 nova_compute[185650]: 2026-01-27 22:53:45.181 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:53:48 compute-0 nova_compute[185650]: 2026-01-27 22:53:48.687 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:53:50 compute-0 nova_compute[185650]: 2026-01-27 22:53:50.187 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:53:52 compute-0 podman[242894]: 2026-01-27 22:53:52.387294136 +0000 UTC m=+0.091395795 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 22:53:53 compute-0 nova_compute[185650]: 2026-01-27 22:53:53.691 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:53:54 compute-0 podman[242919]: 2026-01-27 22:53:54.35460227 +0000 UTC m=+0.062196606 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter)
Jan 27 22:53:54 compute-0 nova_compute[185650]: 2026-01-27 22:53:54.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:53:54 compute-0 nova_compute[185650]: 2026-01-27 22:53:54.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:53:54 compute-0 nova_compute[185650]: 2026-01-27 22:53:54.994 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 22:53:55 compute-0 nova_compute[185650]: 2026-01-27 22:53:55.188 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:53:55 compute-0 nova_compute[185650]: 2026-01-27 22:53:55.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:53:56 compute-0 nova_compute[185650]: 2026-01-27 22:53:56.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:53:56 compute-0 nova_compute[185650]: 2026-01-27 22:53:56.995 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 22:53:57 compute-0 nova_compute[185650]: 2026-01-27 22:53:57.516 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "refresh_cache-dd624b81-38f5-46aa-881b-ca66ace64fd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 22:53:57 compute-0 nova_compute[185650]: 2026-01-27 22:53:57.516 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquired lock "refresh_cache-dd624b81-38f5-46aa-881b-ca66ace64fd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 22:53:57 compute-0 nova_compute[185650]: 2026-01-27 22:53:57.517 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 22:53:58 compute-0 nova_compute[185650]: 2026-01-27 22:53:58.694 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.067 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Updating instance_info_cache with network_info: [{"id": "ba4dd39b-aafe-4664-a6e5-0f4eed30dc40", "address": "fa:16:3e:54:77:d7", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.223", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba4dd39b-aa", "ovs_interfaceid": "ba4dd39b-aafe-4664-a6e5-0f4eed30dc40", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.088 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Releasing lock "refresh_cache-dd624b81-38f5-46aa-881b-ca66ace64fd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.089 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.090 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.090 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.090 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.115 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.115 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.115 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.116 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.192 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.257 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.258 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.320 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.321 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.392 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.393 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.463 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.472 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.531 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.533 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.595 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.597 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.655 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.656 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.716 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.726 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:53:59 compute-0 podman[201529]: time="2026-01-27T22:53:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:53:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:53:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 22:53:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4380 "" "Go-http-client/1.1"
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.790 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.791 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.849 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.851 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.924 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.925 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.984 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:53:59 compute-0 nova_compute[185650]: 2026-01-27 22:53:59.992 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:54:00 compute-0 nova_compute[185650]: 2026-01-27 22:54:00.055 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:54:00 compute-0 nova_compute[185650]: 2026-01-27 22:54:00.057 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:54:00 compute-0 nova_compute[185650]: 2026-01-27 22:54:00.115 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:54:00 compute-0 nova_compute[185650]: 2026-01-27 22:54:00.116 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:54:00 compute-0 nova_compute[185650]: 2026-01-27 22:54:00.177 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:54:00 compute-0 nova_compute[185650]: 2026-01-27 22:54:00.178 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:54:00 compute-0 nova_compute[185650]: 2026-01-27 22:54:00.201 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:54:00 compute-0 nova_compute[185650]: 2026-01-27 22:54:00.270 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:54:00 compute-0 nova_compute[185650]: 2026-01-27 22:54:00.612 185654 WARNING nova.virt.libvirt.driver [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 22:54:00 compute-0 nova_compute[185650]: 2026-01-27 22:54:00.613 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4610MB free_disk=72.3565902709961GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 22:54:00 compute-0 nova_compute[185650]: 2026-01-27 22:54:00.614 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:54:00 compute-0 nova_compute[185650]: 2026-01-27 22:54:00.614 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:54:00 compute-0 nova_compute[185650]: 2026-01-27 22:54:00.689 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 344c74c3-95d6-4f19-993f-b4a89c9d074b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:54:00 compute-0 nova_compute[185650]: 2026-01-27 22:54:00.690 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance d2c3fc6f-7629-469b-be68-8fe07acabe0f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:54:00 compute-0 nova_compute[185650]: 2026-01-27 22:54:00.690 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance dd624b81-38f5-46aa-881b-ca66ace64fd3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:54:00 compute-0 nova_compute[185650]: 2026-01-27 22:54:00.690 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 5409358c-78dc-4761-841a-7f453c6209fb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:54:00 compute-0 nova_compute[185650]: 2026-01-27 22:54:00.690 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 22:54:00 compute-0 nova_compute[185650]: 2026-01-27 22:54:00.691 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 22:54:00 compute-0 nova_compute[185650]: 2026-01-27 22:54:00.708 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Refreshing inventories for resource provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 22:54:00 compute-0 nova_compute[185650]: 2026-01-27 22:54:00.730 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Updating ProviderTree inventory for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 22:54:00 compute-0 nova_compute[185650]: 2026-01-27 22:54:00.730 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Updating inventory in ProviderTree for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 22:54:00 compute-0 nova_compute[185650]: 2026-01-27 22:54:00.744 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Refreshing aggregate associations for resource provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 22:54:00 compute-0 nova_compute[185650]: 2026-01-27 22:54:00.767 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Refreshing trait associations for resource provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NODE,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_AVX,HW_CPU_X86_MMX,HW_CPU_X86_BMI,HW_CPU_X86_SSE41,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 22:54:00 compute-0 nova_compute[185650]: 2026-01-27 22:54:00.852 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 22:54:00 compute-0 nova_compute[185650]: 2026-01-27 22:54:00.866 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 22:54:00 compute-0 nova_compute[185650]: 2026-01-27 22:54:00.868 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 22:54:00 compute-0 nova_compute[185650]: 2026-01-27 22:54:00.868 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.254s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:54:01 compute-0 openstack_network_exporter[204648]: ERROR   22:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:54:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:54:01 compute-0 openstack_network_exporter[204648]: ERROR   22:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:54:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:54:02 compute-0 nova_compute[185650]: 2026-01-27 22:54:02.771 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:54:02 compute-0 nova_compute[185650]: 2026-01-27 22:54:02.771 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:54:03 compute-0 nova_compute[185650]: 2026-01-27 22:54:03.698 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:54:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:54:04.143 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:54:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:54:04.144 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:54:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:54:04.145 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:54:05 compute-0 nova_compute[185650]: 2026-01-27 22:54:05.193 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:54:06 compute-0 podman[242988]: 2026-01-27 22:54:06.431446265 +0000 UTC m=+0.113246411 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 27 22:54:06 compute-0 podman[242989]: 2026-01-27 22:54:06.464985652 +0000 UTC m=+0.141447876 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute)
Jan 27 22:54:08 compute-0 nova_compute[185650]: 2026-01-27 22:54:08.701 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:54:10 compute-0 nova_compute[185650]: 2026-01-27 22:54:10.196 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:54:11 compute-0 podman[243024]: 2026-01-27 22:54:11.350126044 +0000 UTC m=+0.057482468 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 22:54:13 compute-0 podman[243049]: 2026-01-27 22:54:13.41041428 +0000 UTC m=+0.111906434 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 27 22:54:13 compute-0 nova_compute[185650]: 2026-01-27 22:54:13.704 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:54:15 compute-0 nova_compute[185650]: 2026-01-27 22:54:15.198 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:54:15 compute-0 podman[243069]: 2026-01-27 22:54:15.385037704 +0000 UTC m=+0.078688405 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, name=ubi9, com.redhat.component=ubi9-container, container_name=kepler, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, io.openshift.expose-services=, io.buildah.version=1.29.0, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, architecture=x86_64, version=9.4, io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Jan 27 22:54:15 compute-0 podman[243070]: 2026-01-27 22:54:15.423138503 +0000 UTC m=+0.115765977 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 27 22:54:18 compute-0 nova_compute[185650]: 2026-01-27 22:54:18.706 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:54:20 compute-0 nova_compute[185650]: 2026-01-27 22:54:20.202 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:54:22 compute-0 sshd-session[243115]: Received disconnect from 91.224.92.54 port 17400:11:  [preauth]
Jan 27 22:54:22 compute-0 sshd-session[243115]: Disconnected from authenticating user root 91.224.92.54 port 17400 [preauth]
Jan 27 22:54:23 compute-0 podman[243117]: 2026-01-27 22:54:23.380600509 +0000 UTC m=+0.082775586 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 22:54:23 compute-0 nova_compute[185650]: 2026-01-27 22:54:23.708 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:54:25 compute-0 nova_compute[185650]: 2026-01-27 22:54:25.204 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:54:25 compute-0 podman[243140]: 2026-01-27 22:54:25.375767904 +0000 UTC m=+0.075419749 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, build-date=2025-08-20T13:12:41, version=9.6, architecture=x86_64, distribution-scope=public, vcs-type=git, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, container_name=openstack_network_exporter, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, maintainer=Red Hat, Inc.)
Jan 27 22:54:28 compute-0 nova_compute[185650]: 2026-01-27 22:54:28.713 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:54:29 compute-0 nova_compute[185650]: 2026-01-27 22:54:29.642 185654 DEBUG oslo_concurrency.lockutils [None req-95d57237-51e0-461a-8d61-119bc8990a0f 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "d2c3fc6f-7629-469b-be68-8fe07acabe0f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:54:29 compute-0 nova_compute[185650]: 2026-01-27 22:54:29.643 185654 DEBUG oslo_concurrency.lockutils [None req-95d57237-51e0-461a-8d61-119bc8990a0f 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "d2c3fc6f-7629-469b-be68-8fe07acabe0f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:54:29 compute-0 nova_compute[185650]: 2026-01-27 22:54:29.644 185654 DEBUG oslo_concurrency.lockutils [None req-95d57237-51e0-461a-8d61-119bc8990a0f 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "d2c3fc6f-7629-469b-be68-8fe07acabe0f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:54:29 compute-0 nova_compute[185650]: 2026-01-27 22:54:29.644 185654 DEBUG oslo_concurrency.lockutils [None req-95d57237-51e0-461a-8d61-119bc8990a0f 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "d2c3fc6f-7629-469b-be68-8fe07acabe0f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:54:29 compute-0 nova_compute[185650]: 2026-01-27 22:54:29.644 185654 DEBUG oslo_concurrency.lockutils [None req-95d57237-51e0-461a-8d61-119bc8990a0f 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "d2c3fc6f-7629-469b-be68-8fe07acabe0f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:54:29 compute-0 nova_compute[185650]: 2026-01-27 22:54:29.646 185654 INFO nova.compute.manager [None req-95d57237-51e0-461a-8d61-119bc8990a0f 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Terminating instance
Jan 27 22:54:29 compute-0 nova_compute[185650]: 2026-01-27 22:54:29.647 185654 DEBUG nova.compute.manager [None req-95d57237-51e0-461a-8d61-119bc8990a0f 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 22:54:29 compute-0 kernel: tap2083900f-b7 (unregistering): left promiscuous mode
Jan 27 22:54:29 compute-0 NetworkManager[56600]: <info>  [1769554469.7014] device (tap2083900f-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 22:54:29 compute-0 ovn_controller[98048]: 2026-01-27T22:54:29Z|00050|binding|INFO|Releasing lport 2083900f-b759-4c97-8c34-5ad3832f0446 from this chassis (sb_readonly=0)
Jan 27 22:54:29 compute-0 ovn_controller[98048]: 2026-01-27T22:54:29Z|00051|binding|INFO|Setting lport 2083900f-b759-4c97-8c34-5ad3832f0446 down in Southbound
Jan 27 22:54:29 compute-0 ovn_controller[98048]: 2026-01-27T22:54:29Z|00052|binding|INFO|Removing iface tap2083900f-b7 ovn-installed in OVS
Jan 27 22:54:29 compute-0 nova_compute[185650]: 2026-01-27 22:54:29.705 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:54:29 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:54:29.711 107302 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:7c:56 192.168.0.225'], port_security=['fa:16:3e:27:7c:56 192.168.0.225'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-e3ismbxiivp3-qxfwvjemo3rq-sawqp3hw5btx-port-crs66lsbh5mi', 'neutron:cidrs': '192.168.0.225/24', 'neutron:device_id': 'd2c3fc6f-7629-469b-be68-8fe07acabe0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-98f694e3-becc-413f-b42b-35a7171f7f96', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-e3ismbxiivp3-qxfwvjemo3rq-sawqp3hw5btx-port-crs66lsbh5mi', 'neutron:project_id': '8318d5a200d74e4386cf4972db015b75', 'neutron:revision_number': '4', 'neutron:security_group_ids': '597f1057-390b-408a-b8d0-705fb45de27b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.212', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d21d3e2-2f64-49c8-bca6-9efc66f5bd67, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8d908cb640>], logical_port=2083900f-b759-4c97-8c34-5ad3832f0446) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f8d908cb640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 22:54:29 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:54:29.713 107302 INFO neutron.agent.ovn.metadata.agent [-] Port 2083900f-b759-4c97-8c34-5ad3832f0446 in datapath 98f694e3-becc-413f-b42b-35a7171f7f96 unbound from our chassis
Jan 27 22:54:29 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:54:29.715 107302 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 98f694e3-becc-413f-b42b-35a7171f7f96
Jan 27 22:54:29 compute-0 nova_compute[185650]: 2026-01-27 22:54:29.725 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:54:29 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:54:29.731 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[a7c64749-289d-40da-8c80-bd0b696601c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:54:29 compute-0 podman[201529]: time="2026-01-27T22:54:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:54:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:54:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 22:54:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4376 "" "Go-http-client/1.1"
Jan 27 22:54:29 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Jan 27 22:54:29 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 6min 18.269s CPU time.
Jan 27 22:54:29 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:54:29.767 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[6cf907d0-86ee-4ee8-8ec3-cf7bab4c0ff4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:54:29 compute-0 systemd-machined[157036]: Machine qemu-2-instance-00000002 terminated.
Jan 27 22:54:29 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:54:29.770 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[3117699e-ae53-4f48-9a65-52a831075d64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:54:29 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:54:29.807 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[ed67b28b-bbb3-4466-a4f9-dae741b196eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:54:29 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:54:29.825 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[13cf0f83-f691-4b5c-ac18-3077cdf1b94b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap98f694e3-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:25:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 11, 'rx_bytes': 658, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 11, 'rx_bytes': 658, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 365000, 'reachable_time': 29787, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243174, 'error': None, 'target': 'ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:54:29 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:54:29.844 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[fdc18794-7241-4f37-ae17-3dfcb2e50f11]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap98f694e3-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 365013, 'tstamp': 365013}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243175, 'error': None, 'target': 'ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap98f694e3-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 365017, 'tstamp': 365017}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243175, 'error': None, 'target': 'ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:54:29 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:54:29.846 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98f694e3-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:54:29 compute-0 nova_compute[185650]: 2026-01-27 22:54:29.848 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:54:29 compute-0 nova_compute[185650]: 2026-01-27 22:54:29.854 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:54:29 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:54:29.855 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98f694e3-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:54:29 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:54:29.856 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 22:54:29 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:54:29.856 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap98f694e3-b0, col_values=(('external_ids', {'iface-id': 'acacffcb-4de9-40c5-aeef-3e5766b557e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:54:29 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:54:29.857 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 22:54:29 compute-0 nova_compute[185650]: 2026-01-27 22:54:29.925 185654 INFO nova.virt.libvirt.driver [-] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Instance destroyed successfully.
Jan 27 22:54:29 compute-0 nova_compute[185650]: 2026-01-27 22:54:29.926 185654 DEBUG nova.objects.instance [None req-95d57237-51e0-461a-8d61-119bc8990a0f 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lazy-loading 'resources' on Instance uuid d2c3fc6f-7629-469b-be68-8fe07acabe0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 22:54:29 compute-0 nova_compute[185650]: 2026-01-27 22:54:29.939 185654 DEBUG nova.virt.libvirt.vif [None req-95d57237-51e0-461a-8d61-119bc8990a0f 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T22:44:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-bxiivp3-qxfwvjemo3rq-sawqp3hw5btx-vnf-e5pqbtf6sduj',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-bxiivp3-qxfwvjemo3rq-sawqp3hw5btx-vnf-e5pqbtf6sduj',id=2,image_ref='7e803ca7-2382-4e5a-95f7-55acaa154415',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T22:44:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='3b67098f-eb50-41e2-8c8a-348367561673'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8318d5a200d74e4386cf4972db015b75',ramdisk_id='',reservation_id='r-57dydbj3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='7e803ca7-2382-4e5a-95f7-55acaa154415',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T22:44:44Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wNzA5NzA0NDQxMTA1NDQ2MzY1PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTA3MDk3MDQ0NDExMDU0NDYzNjU9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDcwOTcwNDQ0MTEwNTQ0NjM2NT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTA3MDk3MDQ0NDExMDU0NDYzNjU9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wNzA5NzA0NDQxMTA1NDQ2MzY1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wNzA5NzA0NDQxMTA1NDQ2MzY1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Jan 27 22:54:29 compute-0 nova_compute[185650]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDcwO
TcwNDQ0MTEwNTQ0NjM2NT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTA3MDk3MDQ0NDExMDU0NDYzNjU9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wNzA5NzA0NDQxMTA1NDQ2MzY1PT0tLQo=',user_id='7387204f74504e288ed7a5dee73f5083',uuid=d2c3fc6f-7629-469b-be68-8fe07acabe0f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2083900f-b759-4c97-8c34-5ad3832f0446", "address": "fa:16:3e:27:7c:56", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2083900f-b7", "ovs_interfaceid": "2083900f-b759-4c97-8c34-5ad3832f0446", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 22:54:29 compute-0 nova_compute[185650]: 2026-01-27 22:54:29.940 185654 DEBUG nova.network.os_vif_util [None req-95d57237-51e0-461a-8d61-119bc8990a0f 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Converting VIF {"id": "2083900f-b759-4c97-8c34-5ad3832f0446", "address": "fa:16:3e:27:7c:56", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2083900f-b7", "ovs_interfaceid": "2083900f-b759-4c97-8c34-5ad3832f0446", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 22:54:29 compute-0 nova_compute[185650]: 2026-01-27 22:54:29.940 185654 DEBUG nova.network.os_vif_util [None req-95d57237-51e0-461a-8d61-119bc8990a0f 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:27:7c:56,bridge_name='br-int',has_traffic_filtering=True,id=2083900f-b759-4c97-8c34-5ad3832f0446,network=Network(98f694e3-becc-413f-b42b-35a7171f7f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2083900f-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 22:54:29 compute-0 nova_compute[185650]: 2026-01-27 22:54:29.941 185654 DEBUG os_vif [None req-95d57237-51e0-461a-8d61-119bc8990a0f 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:27:7c:56,bridge_name='br-int',has_traffic_filtering=True,id=2083900f-b759-4c97-8c34-5ad3832f0446,network=Network(98f694e3-becc-413f-b42b-35a7171f7f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2083900f-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 22:54:29 compute-0 nova_compute[185650]: 2026-01-27 22:54:29.944 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:54:29 compute-0 nova_compute[185650]: 2026-01-27 22:54:29.945 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2083900f-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:54:29 compute-0 nova_compute[185650]: 2026-01-27 22:54:29.946 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:54:29 compute-0 nova_compute[185650]: 2026-01-27 22:54:29.948 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:54:29 compute-0 nova_compute[185650]: 2026-01-27 22:54:29.950 185654 INFO os_vif [None req-95d57237-51e0-461a-8d61-119bc8990a0f 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:27:7c:56,bridge_name='br-int',has_traffic_filtering=True,id=2083900f-b759-4c97-8c34-5ad3832f0446,network=Network(98f694e3-becc-413f-b42b-35a7171f7f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2083900f-b7')
Jan 27 22:54:29 compute-0 nova_compute[185650]: 2026-01-27 22:54:29.951 185654 INFO nova.virt.libvirt.driver [None req-95d57237-51e0-461a-8d61-119bc8990a0f 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Deleting instance files /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f_del
Jan 27 22:54:29 compute-0 nova_compute[185650]: 2026-01-27 22:54:29.952 185654 INFO nova.virt.libvirt.driver [None req-95d57237-51e0-461a-8d61-119bc8990a0f 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Deletion of /var/lib/nova/instances/d2c3fc6f-7629-469b-be68-8fe07acabe0f_del complete
Jan 27 22:54:30 compute-0 nova_compute[185650]: 2026-01-27 22:54:30.014 185654 DEBUG nova.virt.libvirt.host [None req-95d57237-51e0-461a-8d61-119bc8990a0f 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Jan 27 22:54:30 compute-0 nova_compute[185650]: 2026-01-27 22:54:30.015 185654 INFO nova.virt.libvirt.host [None req-95d57237-51e0-461a-8d61-119bc8990a0f 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] UEFI support detected
Jan 27 22:54:30 compute-0 nova_compute[185650]: 2026-01-27 22:54:30.017 185654 INFO nova.compute.manager [None req-95d57237-51e0-461a-8d61-119bc8990a0f 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Took 0.37 seconds to destroy the instance on the hypervisor.
Jan 27 22:54:30 compute-0 nova_compute[185650]: 2026-01-27 22:54:30.018 185654 DEBUG oslo.service.loopingcall [None req-95d57237-51e0-461a-8d61-119bc8990a0f 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 22:54:30 compute-0 nova_compute[185650]: 2026-01-27 22:54:30.018 185654 DEBUG nova.compute.manager [-] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 22:54:30 compute-0 nova_compute[185650]: 2026-01-27 22:54:30.019 185654 DEBUG nova.network.neutron [-] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 22:54:30 compute-0 nova_compute[185650]: 2026-01-27 22:54:30.206 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:54:30 compute-0 rsyslogd[235951]: message too long (8192) with configured size 8096, begin of message is: 2026-01-27 22:54:29.939 185654 DEBUG nova.virt.libvirt.vif [None req-95d57237-51 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 22:54:31 compute-0 nova_compute[185650]: 2026-01-27 22:54:31.075 185654 DEBUG nova.compute.manager [req-6617ef76-9820-4f3b-8f45-91d44402f9f3 req-6d94932f-fb50-4ce4-a57c-406f16504552 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Received event network-vif-unplugged-2083900f-b759-4c97-8c34-5ad3832f0446 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 22:54:31 compute-0 nova_compute[185650]: 2026-01-27 22:54:31.075 185654 DEBUG oslo_concurrency.lockutils [req-6617ef76-9820-4f3b-8f45-91d44402f9f3 req-6d94932f-fb50-4ce4-a57c-406f16504552 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "d2c3fc6f-7629-469b-be68-8fe07acabe0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:54:31 compute-0 nova_compute[185650]: 2026-01-27 22:54:31.076 185654 DEBUG oslo_concurrency.lockutils [req-6617ef76-9820-4f3b-8f45-91d44402f9f3 req-6d94932f-fb50-4ce4-a57c-406f16504552 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "d2c3fc6f-7629-469b-be68-8fe07acabe0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:54:31 compute-0 nova_compute[185650]: 2026-01-27 22:54:31.076 185654 DEBUG oslo_concurrency.lockutils [req-6617ef76-9820-4f3b-8f45-91d44402f9f3 req-6d94932f-fb50-4ce4-a57c-406f16504552 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "d2c3fc6f-7629-469b-be68-8fe07acabe0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:54:31 compute-0 nova_compute[185650]: 2026-01-27 22:54:31.076 185654 DEBUG nova.compute.manager [req-6617ef76-9820-4f3b-8f45-91d44402f9f3 req-6d94932f-fb50-4ce4-a57c-406f16504552 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] No waiting events found dispatching network-vif-unplugged-2083900f-b759-4c97-8c34-5ad3832f0446 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 22:54:31 compute-0 nova_compute[185650]: 2026-01-27 22:54:31.076 185654 DEBUG nova.compute.manager [req-6617ef76-9820-4f3b-8f45-91d44402f9f3 req-6d94932f-fb50-4ce4-a57c-406f16504552 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Received event network-vif-unplugged-2083900f-b759-4c97-8c34-5ad3832f0446 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 22:54:31 compute-0 openstack_network_exporter[204648]: ERROR   22:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:54:31 compute-0 openstack_network_exporter[204648]: ERROR   22:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:54:31 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:54:31.857 107302 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:41:f8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '26:ae:8e:b8:80:28'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 22:54:31 compute-0 nova_compute[185650]: 2026-01-27 22:54:31.857 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:54:31 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:54:31.858 107302 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 22:54:32 compute-0 nova_compute[185650]: 2026-01-27 22:54:32.506 185654 DEBUG nova.network.neutron [-] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 22:54:32 compute-0 nova_compute[185650]: 2026-01-27 22:54:32.532 185654 INFO nova.compute.manager [-] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Took 2.51 seconds to deallocate network for instance.
Jan 27 22:54:32 compute-0 nova_compute[185650]: 2026-01-27 22:54:32.573 185654 DEBUG oslo_concurrency.lockutils [None req-95d57237-51e0-461a-8d61-119bc8990a0f 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:54:32 compute-0 nova_compute[185650]: 2026-01-27 22:54:32.574 185654 DEBUG oslo_concurrency.lockutils [None req-95d57237-51e0-461a-8d61-119bc8990a0f 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:54:32 compute-0 nova_compute[185650]: 2026-01-27 22:54:32.688 185654 DEBUG nova.compute.provider_tree [None req-95d57237-51e0-461a-8d61-119bc8990a0f 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 22:54:32 compute-0 nova_compute[185650]: 2026-01-27 22:54:32.704 185654 DEBUG nova.scheduler.client.report [None req-95d57237-51e0-461a-8d61-119bc8990a0f 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 22:54:32 compute-0 nova_compute[185650]: 2026-01-27 22:54:32.733 185654 DEBUG oslo_concurrency.lockutils [None req-95d57237-51e0-461a-8d61-119bc8990a0f 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:54:32 compute-0 nova_compute[185650]: 2026-01-27 22:54:32.762 185654 INFO nova.scheduler.client.report [None req-95d57237-51e0-461a-8d61-119bc8990a0f 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Deleted allocations for instance d2c3fc6f-7629-469b-be68-8fe07acabe0f
Jan 27 22:54:32 compute-0 nova_compute[185650]: 2026-01-27 22:54:32.829 185654 DEBUG oslo_concurrency.lockutils [None req-95d57237-51e0-461a-8d61-119bc8990a0f 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "d2c3fc6f-7629-469b-be68-8fe07acabe0f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.185s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:54:33 compute-0 nova_compute[185650]: 2026-01-27 22:54:33.166 185654 DEBUG nova.compute.manager [req-40606a6b-87cd-47b7-9a7e-3931e192728e req-83152b31-e3da-47b6-8a8e-a446608e5929 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Received event network-vif-plugged-2083900f-b759-4c97-8c34-5ad3832f0446 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 22:54:33 compute-0 nova_compute[185650]: 2026-01-27 22:54:33.167 185654 DEBUG oslo_concurrency.lockutils [req-40606a6b-87cd-47b7-9a7e-3931e192728e req-83152b31-e3da-47b6-8a8e-a446608e5929 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "d2c3fc6f-7629-469b-be68-8fe07acabe0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:54:33 compute-0 nova_compute[185650]: 2026-01-27 22:54:33.167 185654 DEBUG oslo_concurrency.lockutils [req-40606a6b-87cd-47b7-9a7e-3931e192728e req-83152b31-e3da-47b6-8a8e-a446608e5929 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "d2c3fc6f-7629-469b-be68-8fe07acabe0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:54:33 compute-0 nova_compute[185650]: 2026-01-27 22:54:33.168 185654 DEBUG oslo_concurrency.lockutils [req-40606a6b-87cd-47b7-9a7e-3931e192728e req-83152b31-e3da-47b6-8a8e-a446608e5929 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "d2c3fc6f-7629-469b-be68-8fe07acabe0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:54:33 compute-0 nova_compute[185650]: 2026-01-27 22:54:33.169 185654 DEBUG nova.compute.manager [req-40606a6b-87cd-47b7-9a7e-3931e192728e req-83152b31-e3da-47b6-8a8e-a446608e5929 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] No waiting events found dispatching network-vif-plugged-2083900f-b759-4c97-8c34-5ad3832f0446 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 22:54:33 compute-0 nova_compute[185650]: 2026-01-27 22:54:33.169 185654 WARNING nova.compute.manager [req-40606a6b-87cd-47b7-9a7e-3931e192728e req-83152b31-e3da-47b6-8a8e-a446608e5929 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Received unexpected event network-vif-plugged-2083900f-b759-4c97-8c34-5ad3832f0446 for instance with vm_state deleted and task_state None.
Jan 27 22:54:33 compute-0 nova_compute[185650]: 2026-01-27 22:54:33.170 185654 DEBUG nova.compute.manager [req-40606a6b-87cd-47b7-9a7e-3931e192728e req-83152b31-e3da-47b6-8a8e-a446608e5929 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Received event network-changed-2083900f-b759-4c97-8c34-5ad3832f0446 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 22:54:33 compute-0 nova_compute[185650]: 2026-01-27 22:54:33.171 185654 DEBUG nova.compute.manager [req-40606a6b-87cd-47b7-9a7e-3931e192728e req-83152b31-e3da-47b6-8a8e-a446608e5929 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Refreshing instance network info cache due to event network-changed-2083900f-b759-4c97-8c34-5ad3832f0446. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 22:54:33 compute-0 nova_compute[185650]: 2026-01-27 22:54:33.171 185654 DEBUG oslo_concurrency.lockutils [req-40606a6b-87cd-47b7-9a7e-3931e192728e req-83152b31-e3da-47b6-8a8e-a446608e5929 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "refresh_cache-d2c3fc6f-7629-469b-be68-8fe07acabe0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 22:54:33 compute-0 nova_compute[185650]: 2026-01-27 22:54:33.172 185654 DEBUG oslo_concurrency.lockutils [req-40606a6b-87cd-47b7-9a7e-3931e192728e req-83152b31-e3da-47b6-8a8e-a446608e5929 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquired lock "refresh_cache-d2c3fc6f-7629-469b-be68-8fe07acabe0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 22:54:33 compute-0 nova_compute[185650]: 2026-01-27 22:54:33.173 185654 DEBUG nova.network.neutron [req-40606a6b-87cd-47b7-9a7e-3931e192728e req-83152b31-e3da-47b6-8a8e-a446608e5929 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Refreshing network info cache for port 2083900f-b759-4c97-8c34-5ad3832f0446 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 22:54:33 compute-0 nova_compute[185650]: 2026-01-27 22:54:33.338 185654 DEBUG nova.network.neutron [req-40606a6b-87cd-47b7-9a7e-3931e192728e req-83152b31-e3da-47b6-8a8e-a446608e5929 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 22:54:33 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:54:33.860 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e88f80e1-ee63-4bdc-95c3-ad473efb7428, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:54:34 compute-0 nova_compute[185650]: 2026-01-27 22:54:34.665 185654 DEBUG nova.network.neutron [req-40606a6b-87cd-47b7-9a7e-3931e192728e req-83152b31-e3da-47b6-8a8e-a446608e5929 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Jan 27 22:54:34 compute-0 nova_compute[185650]: 2026-01-27 22:54:34.667 185654 DEBUG oslo_concurrency.lockutils [req-40606a6b-87cd-47b7-9a7e-3931e192728e req-83152b31-e3da-47b6-8a8e-a446608e5929 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Releasing lock "refresh_cache-d2c3fc6f-7629-469b-be68-8fe07acabe0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 22:54:34 compute-0 nova_compute[185650]: 2026-01-27 22:54:34.947 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:54:35 compute-0 nova_compute[185650]: 2026-01-27 22:54:35.209 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:54:37 compute-0 podman[243199]: 2026-01-27 22:54:37.403443515 +0000 UTC m=+0.085066128 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true)
Jan 27 22:54:37 compute-0 podman[243198]: 2026-01-27 22:54:37.408132239 +0000 UTC m=+0.091188969 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 27 22:54:39 compute-0 nova_compute[185650]: 2026-01-27 22:54:39.950 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:54:40 compute-0 nova_compute[185650]: 2026-01-27 22:54:40.211 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:54:42 compute-0 podman[243238]: 2026-01-27 22:54:42.386946739 +0000 UTC m=+0.081319446 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 22:54:44 compute-0 podman[243262]: 2026-01-27 22:54:44.387895689 +0000 UTC m=+0.079701454 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Jan 27 22:54:44 compute-0 nova_compute[185650]: 2026-01-27 22:54:44.923 185654 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769554469.921264, d2c3fc6f-7629-469b-be68-8fe07acabe0f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 22:54:44 compute-0 nova_compute[185650]: 2026-01-27 22:54:44.924 185654 INFO nova.compute.manager [-] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] VM Stopped (Lifecycle Event)
Jan 27 22:54:44 compute-0 nova_compute[185650]: 2026-01-27 22:54:44.939 185654 DEBUG nova.compute.manager [None req-e7623ae9-e072-4b10-9c57-10c9886e7071 - - - - - -] [instance: d2c3fc6f-7629-469b-be68-8fe07acabe0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 22:54:44 compute-0 nova_compute[185650]: 2026-01-27 22:54:44.952 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:54:45 compute-0 nova_compute[185650]: 2026-01-27 22:54:45.214 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:54:46 compute-0 podman[243282]: 2026-01-27 22:54:46.395588437 +0000 UTC m=+0.090494322 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, config_id=kepler, vcs-type=git, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., distribution-scope=public, release=1214.1726694543)
Jan 27 22:54:46 compute-0 podman[243283]: 2026-01-27 22:54:46.415089338 +0000 UTC m=+0.110894437 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 22:54:49 compute-0 nova_compute[185650]: 2026-01-27 22:54:49.954 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:54:50 compute-0 nova_compute[185650]: 2026-01-27 22:54:50.217 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:54:52 compute-0 nova_compute[185650]: 2026-01-27 22:54:52.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:54:54 compute-0 podman[243327]: 2026-01-27 22:54:54.366929132 +0000 UTC m=+0.070134408 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 22:54:54 compute-0 nova_compute[185650]: 2026-01-27 22:54:54.956 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:54:55 compute-0 nova_compute[185650]: 2026-01-27 22:54:55.005 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:54:55 compute-0 nova_compute[185650]: 2026-01-27 22:54:55.006 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 22:54:55 compute-0 nova_compute[185650]: 2026-01-27 22:54:55.006 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:54:55 compute-0 nova_compute[185650]: 2026-01-27 22:54:55.006 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 22:54:55 compute-0 nova_compute[185650]: 2026-01-27 22:54:55.042 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 22:54:55 compute-0 nova_compute[185650]: 2026-01-27 22:54:55.220 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:54:56 compute-0 podman[243351]: 2026-01-27 22:54:56.368659761 +0000 UTC m=+0.075429819 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, version=9.6, architecture=x86_64, release=1755695350)
Jan 27 22:54:56 compute-0 nova_compute[185650]: 2026-01-27 22:54:56.557 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:54:56 compute-0 nova_compute[185650]: 2026-01-27 22:54:56.594 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Triggering sync for uuid 344c74c3-95d6-4f19-993f-b4a89c9d074b _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 27 22:54:56 compute-0 nova_compute[185650]: 2026-01-27 22:54:56.594 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Triggering sync for uuid dd624b81-38f5-46aa-881b-ca66ace64fd3 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 27 22:54:56 compute-0 nova_compute[185650]: 2026-01-27 22:54:56.595 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Triggering sync for uuid 5409358c-78dc-4761-841a-7f453c6209fb _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 27 22:54:56 compute-0 nova_compute[185650]: 2026-01-27 22:54:56.595 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "344c74c3-95d6-4f19-993f-b4a89c9d074b" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:54:56 compute-0 nova_compute[185650]: 2026-01-27 22:54:56.595 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "344c74c3-95d6-4f19-993f-b4a89c9d074b" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:54:56 compute-0 nova_compute[185650]: 2026-01-27 22:54:56.595 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "dd624b81-38f5-46aa-881b-ca66ace64fd3" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:54:56 compute-0 nova_compute[185650]: 2026-01-27 22:54:56.596 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "dd624b81-38f5-46aa-881b-ca66ace64fd3" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:54:56 compute-0 nova_compute[185650]: 2026-01-27 22:54:56.597 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "5409358c-78dc-4761-841a-7f453c6209fb" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:54:56 compute-0 nova_compute[185650]: 2026-01-27 22:54:56.597 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "5409358c-78dc-4761-841a-7f453c6209fb" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:54:56 compute-0 nova_compute[185650]: 2026-01-27 22:54:56.646 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "344c74c3-95d6-4f19-993f-b4a89c9d074b" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:54:56 compute-0 nova_compute[185650]: 2026-01-27 22:54:56.649 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "dd624b81-38f5-46aa-881b-ca66ace64fd3" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:54:56 compute-0 nova_compute[185650]: 2026-01-27 22:54:56.669 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "5409358c-78dc-4761-841a-7f453c6209fb" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:54:57 compute-0 nova_compute[185650]: 2026-01-27 22:54:57.031 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:54:57 compute-0 nova_compute[185650]: 2026-01-27 22:54:57.992 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:54:58 compute-0 nova_compute[185650]: 2026-01-27 22:54:58.992 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:54:58 compute-0 nova_compute[185650]: 2026-01-27 22:54:58.993 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 22:54:59 compute-0 nova_compute[185650]: 2026-01-27 22:54:59.534 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "refresh_cache-5409358c-78dc-4761-841a-7f453c6209fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 22:54:59 compute-0 nova_compute[185650]: 2026-01-27 22:54:59.534 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquired lock "refresh_cache-5409358c-78dc-4761-841a-7f453c6209fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 22:54:59 compute-0 nova_compute[185650]: 2026-01-27 22:54:59.534 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 22:54:59 compute-0 podman[201529]: time="2026-01-27T22:54:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:54:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:54:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 22:54:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:54:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4379 "" "Go-http-client/1.1"
Jan 27 22:54:59 compute-0 nova_compute[185650]: 2026-01-27 22:54:59.958 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:55:00 compute-0 nova_compute[185650]: 2026-01-27 22:55:00.222 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:55:00 compute-0 nova_compute[185650]: 2026-01-27 22:55:00.733 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Updating instance_info_cache with network_info: [{"id": "ccfe58e9-3ff7-4073-9f9f-c8e641661ba0", "address": "fa:16:3e:17:dc:a3", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccfe58e9-3f", "ovs_interfaceid": "ccfe58e9-3ff7-4073-9f9f-c8e641661ba0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 22:55:00 compute-0 nova_compute[185650]: 2026-01-27 22:55:00.753 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Releasing lock "refresh_cache-5409358c-78dc-4761-841a-7f453c6209fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 22:55:00 compute-0 nova_compute[185650]: 2026-01-27 22:55:00.754 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 22:55:00 compute-0 nova_compute[185650]: 2026-01-27 22:55:00.754 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:55:00 compute-0 nova_compute[185650]: 2026-01-27 22:55:00.755 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:55:00 compute-0 nova_compute[185650]: 2026-01-27 22:55:00.756 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:55:00 compute-0 nova_compute[185650]: 2026-01-27 22:55:00.778 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:55:00 compute-0 nova_compute[185650]: 2026-01-27 22:55:00.778 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:55:00 compute-0 nova_compute[185650]: 2026-01-27 22:55:00.779 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:55:00 compute-0 nova_compute[185650]: 2026-01-27 22:55:00.780 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 22:55:00 compute-0 nova_compute[185650]: 2026-01-27 22:55:00.865 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:55:00 compute-0 nova_compute[185650]: 2026-01-27 22:55:00.960 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:55:00 compute-0 nova_compute[185650]: 2026-01-27 22:55:00.961 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:55:01 compute-0 nova_compute[185650]: 2026-01-27 22:55:01.025 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:55:01 compute-0 nova_compute[185650]: 2026-01-27 22:55:01.026 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:55:01 compute-0 nova_compute[185650]: 2026-01-27 22:55:01.082 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:55:01 compute-0 nova_compute[185650]: 2026-01-27 22:55:01.084 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:55:01 compute-0 nova_compute[185650]: 2026-01-27 22:55:01.140 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:55:01 compute-0 nova_compute[185650]: 2026-01-27 22:55:01.147 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:55:01 compute-0 nova_compute[185650]: 2026-01-27 22:55:01.201 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:55:01 compute-0 nova_compute[185650]: 2026-01-27 22:55:01.202 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:55:01 compute-0 nova_compute[185650]: 2026-01-27 22:55:01.261 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:55:01 compute-0 nova_compute[185650]: 2026-01-27 22:55:01.263 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:55:01 compute-0 nova_compute[185650]: 2026-01-27 22:55:01.364 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:55:01 compute-0 nova_compute[185650]: 2026-01-27 22:55:01.365 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:55:01 compute-0 openstack_network_exporter[204648]: ERROR   22:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:55:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:55:01 compute-0 openstack_network_exporter[204648]: ERROR   22:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:55:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:55:01 compute-0 nova_compute[185650]: 2026-01-27 22:55:01.451 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:55:01 compute-0 nova_compute[185650]: 2026-01-27 22:55:01.457 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:55:01 compute-0 nova_compute[185650]: 2026-01-27 22:55:01.543 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:55:01 compute-0 nova_compute[185650]: 2026-01-27 22:55:01.545 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:55:01 compute-0 nova_compute[185650]: 2026-01-27 22:55:01.626 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:55:01 compute-0 nova_compute[185650]: 2026-01-27 22:55:01.627 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:55:01 compute-0 nova_compute[185650]: 2026-01-27 22:55:01.704 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:55:01 compute-0 nova_compute[185650]: 2026-01-27 22:55:01.706 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:55:01 compute-0 nova_compute[185650]: 2026-01-27 22:55:01.762 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:55:02 compute-0 nova_compute[185650]: 2026-01-27 22:55:02.129 185654 WARNING nova.virt.libvirt.driver [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 22:55:02 compute-0 nova_compute[185650]: 2026-01-27 22:55:02.131 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4740MB free_disk=72.37659454345703GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 22:55:02 compute-0 nova_compute[185650]: 2026-01-27 22:55:02.131 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:55:02 compute-0 nova_compute[185650]: 2026-01-27 22:55:02.131 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:55:02 compute-0 nova_compute[185650]: 2026-01-27 22:55:02.323 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 344c74c3-95d6-4f19-993f-b4a89c9d074b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:55:02 compute-0 nova_compute[185650]: 2026-01-27 22:55:02.324 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance dd624b81-38f5-46aa-881b-ca66ace64fd3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:55:02 compute-0 nova_compute[185650]: 2026-01-27 22:55:02.324 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 5409358c-78dc-4761-841a-7f453c6209fb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:55:02 compute-0 nova_compute[185650]: 2026-01-27 22:55:02.325 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 22:55:02 compute-0 nova_compute[185650]: 2026-01-27 22:55:02.326 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 22:55:02 compute-0 nova_compute[185650]: 2026-01-27 22:55:02.454 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 22:55:02 compute-0 nova_compute[185650]: 2026-01-27 22:55:02.478 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 22:55:02 compute-0 nova_compute[185650]: 2026-01-27 22:55:02.498 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 22:55:02 compute-0 nova_compute[185650]: 2026-01-27 22:55:02.498 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.367s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:55:02 compute-0 nova_compute[185650]: 2026-01-27 22:55:02.499 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:55:02 compute-0 nova_compute[185650]: 2026-01-27 22:55:02.499 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 22:55:03 compute-0 nova_compute[185650]: 2026-01-27 22:55:03.506 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:55:03 compute-0 nova_compute[185650]: 2026-01-27 22:55:03.506 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:55:03 compute-0 nova_compute[185650]: 2026-01-27 22:55:03.529 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:55:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:55:04.145 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:55:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:55:04.145 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:55:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:55:04.146 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:55:04 compute-0 ovn_controller[98048]: 2026-01-27T22:55:04Z|00053|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Jan 27 22:55:04 compute-0 nova_compute[185650]: 2026-01-27 22:55:04.962 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:55:05 compute-0 nova_compute[185650]: 2026-01-27 22:55:05.226 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:55:08 compute-0 podman[243409]: 2026-01-27 22:55:08.377253644 +0000 UTC m=+0.071244815 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 22:55:08 compute-0 podman[243410]: 2026-01-27 22:55:08.379279105 +0000 UTC m=+0.079537643 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, managed_by=edpm_ansible)
Jan 27 22:55:09 compute-0 nova_compute[185650]: 2026-01-27 22:55:09.965 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:55:10 compute-0 nova_compute[185650]: 2026-01-27 22:55:10.227 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:55:13 compute-0 podman[243447]: 2026-01-27 22:55:13.360742841 +0000 UTC m=+0.063373486 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 22:55:14 compute-0 podman[243469]: 2026-01-27 22:55:14.792756235 +0000 UTC m=+0.110257727 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 22:55:14 compute-0 nova_compute[185650]: 2026-01-27 22:55:14.967 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:55:15 compute-0 nova_compute[185650]: 2026-01-27 22:55:15.230 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:55:17 compute-0 podman[243488]: 2026-01-27 22:55:17.37511886 +0000 UTC m=+0.078078858 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., container_name=kepler, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, distribution-scope=public, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, release=1214.1726694543, config_id=kepler, io.buildah.version=1.29.0, version=9.4, release-0.7.12=, name=ubi9, vendor=Red Hat, Inc.)
Jan 27 22:55:17 compute-0 podman[243489]: 2026-01-27 22:55:17.410755678 +0000 UTC m=+0.110387431 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 27 22:55:19 compute-0 nova_compute[185650]: 2026-01-27 22:55:19.969 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:55:20 compute-0 nova_compute[185650]: 2026-01-27 22:55:20.232 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:55:24 compute-0 nova_compute[185650]: 2026-01-27 22:55:24.971 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:55:25 compute-0 nova_compute[185650]: 2026-01-27 22:55:25.235 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:55:25 compute-0 podman[243535]: 2026-01-27 22:55:25.361680732 +0000 UTC m=+0.059714735 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 22:55:27 compute-0 podman[243556]: 2026-01-27 22:55:27.376876532 +0000 UTC m=+0.075882842 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1755695350, vcs-type=git, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter)
Jan 27 22:55:29 compute-0 podman[201529]: time="2026-01-27T22:55:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:55:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:55:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 22:55:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4374 "" "Go-http-client/1.1"
Jan 27 22:55:29 compute-0 nova_compute[185650]: 2026-01-27 22:55:29.974 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:55:30 compute-0 nova_compute[185650]: 2026-01-27 22:55:30.238 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:55:31 compute-0 openstack_network_exporter[204648]: ERROR   22:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:55:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:55:31 compute-0 openstack_network_exporter[204648]: ERROR   22:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:55:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:55:34 compute-0 nova_compute[185650]: 2026-01-27 22:55:34.976 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:55:35 compute-0 nova_compute[185650]: 2026-01-27 22:55:35.240 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.106 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.107 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c646060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.108 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f826c6475f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.108 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.109 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645490>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.116 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.116 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.117 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6477a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.116 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5409358c-78dc-4761-841a-7f453c6209fb', 'name': 'vn-bxiivp3-je4u2ztq4ixb-joz7rt6vemeh-vnf-jpr5uezxduem', 'flavor': {'id': 'c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7e803ca7-2382-4e5a-95f7-55acaa154415'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8318d5a200d74e4386cf4972db015b75', 'user_id': '7387204f74504e288ed7a5dee73f5083', 'hostId': '6b704d868c202dfce1245c3ae64d5f83176b88963479398e3b586eea', 'status': 'active', 'metadata': {'metering.server_group': '3b67098f-eb50-41e2-8c8a-348367561673'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.121 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '344c74c3-95d6-4f19-993f-b4a89c9d074b', 'name': 'test_0', 'flavor': {'id': 'c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7e803ca7-2382-4e5a-95f7-55acaa154415'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8318d5a200d74e4386cf4972db015b75', 'user_id': '7387204f74504e288ed7a5dee73f5083', 'hostId': '6b704d868c202dfce1245c3ae64d5f83176b88963479398e3b586eea', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.125 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dd624b81-38f5-46aa-881b-ca66ace64fd3', 'name': 'vn-bxiivp3-2npykxfceygn-qfpmbakkd4ep-vnf-ztsky6llf24g', 'flavor': {'id': 'c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7e803ca7-2382-4e5a-95f7-55acaa154415'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8318d5a200d74e4386cf4972db015b75', 'user_id': '7387204f74504e288ed7a5dee73f5083', 'hostId': '6b704d868c202dfce1245c3ae64d5f83176b88963479398e3b586eea', 'status': 'active', 'metadata': {'metering.server_group': '3b67098f-eb50-41e2-8c8a-348367561673'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.125 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.126 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c646060>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.126 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c646060>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.126 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.127 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T22:55:38.126354) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.130 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.133 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.136 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.137 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.137 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f826c645dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.137 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.137 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f826c647800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.137 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.138 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.138 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.138 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.138 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.138 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T22:55:38.138186) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.138 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.139 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.139 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.139 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f826c647650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.139 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.139 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.139 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.139 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.140 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.140 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.140 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.140 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.141 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T22:55:38.139851) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.140 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f826c645640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.141 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.141 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.141 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.141 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.142 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T22:55:38.141510) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.239 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.latency volume: 2048805649 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.239 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.latency volume: 9512100 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.239 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.311 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.latency volume: 1982773015 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.312 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.latency volume: 11972381 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.312 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.389 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.latency volume: 1883856140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.390 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.latency volume: 12652797 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.390 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.392 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.392 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f826c8ae7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.393 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.393 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.393 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.394 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.394 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T22:55:38.393889) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.395 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.incoming.bytes volume: 1612 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.395 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.bytes volume: 2130 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.396 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.397 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.397 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f826c645a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.398 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.398 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.398 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.399 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.400 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T22:55:38.398863) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.400 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.400 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.400 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.401 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.401 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.401 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.402 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.402 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.403 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.403 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.404 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f826c6462a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.404 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.404 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.404 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.405 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T22:55:38.404546) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.404 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.405 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.405 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.406 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.406 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.407 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f826c647f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.407 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.407 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.407 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.407 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.408 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T22:55:38.407663) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.432 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/cpu volume: 37250000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.464 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/cpu volume: 41010000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.502 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/cpu volume: 34220000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.503 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.503 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f826c645af0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.503 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.503 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.503 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.503 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.504 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.504 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f826c645d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.504 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.504 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.504 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.504 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.504 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/memory.usage volume: 49.07421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.505 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/memory.usage volume: 48.734375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.505 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.505 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.505 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f826c645b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.505 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.505 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.506 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.506 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.506 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f826c644a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.506 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.506 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.507 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.507 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.507 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T22:55:38.503922) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.508 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T22:55:38.504863) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.509 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T22:55:38.506066) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.509 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T22:55:38.507115) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.537 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.537 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.537 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.567 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.568 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.568 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.599 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.599 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.600 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.600 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.600 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f826c6453a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.601 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.601 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645490>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.601 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645490>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.601 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.601 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.601 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.601 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T22:55:38.601413) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.602 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.602 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.602 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.602 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.602 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.603 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.603 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.603 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.604 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f826c6454c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.604 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.604 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.604 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.604 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.604 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.latency volume: 669467296 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.604 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.latency volume: 92088857 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.604 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.latency volume: 79077409 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.605 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.latency volume: 603707572 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.605 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.latency volume: 113814738 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.605 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.latency volume: 101138361 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.605 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.latency volume: 587344116 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.606 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.latency volume: 100532473 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.606 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T22:55:38.604355) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.606 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.latency volume: 196826454 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.607 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.607 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f826c645520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.607 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.607 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645550>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.607 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645550>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.607 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.607 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.608 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.608 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.608 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.608 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T22:55:38.607788) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.609 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.609 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.609 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.609 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.609 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.610 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.610 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f826c645d90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.610 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.610 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.610 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.610 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.610 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.610 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.611 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T22:55:38.610615) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.611 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.611 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.611 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f826c646570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.611 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.611 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.612 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.612 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.612 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.612 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T22:55:38.612114) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.612 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.612 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.613 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.613 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f826c645580>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.613 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.613 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.613 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.613 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.613 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.613 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.614 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T22:55:38.613626) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.614 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.614 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.614 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.614 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.615 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.615 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.615 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.615 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.615 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f826c6455e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.616 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.616 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645610>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.616 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645610>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.616 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.616 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.616 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.616 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.617 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.617 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T22:55:38.616264) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.617 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.617 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.617 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.617 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.618 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.618 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.618 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f826c644050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.618 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.618 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.618 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.618 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.619 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.619 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T22:55:38.618888) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.619 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.619 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.619 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.619 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f826c647860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.619 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.620 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.620 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.620 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.620 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.620 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.620 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.621 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T22:55:38.620172) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.621 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.621 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f826c6476e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.621 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.621 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.621 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.621 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.621 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.621 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.622 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.622 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.622 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f826c6456a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.622 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.622 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645730>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.622 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645730>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.622 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T22:55:38.621530) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.623 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.623 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.623 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T22:55:38.622972) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.623 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.623 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.623 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.623 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f826f277b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.624 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.624 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.624 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.624 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.624 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.624 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.624 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.625 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T22:55:38.624294) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.625 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.625 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.625 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.625 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.625 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.626 14 DEBUG ceilometer.compute.pollsters [-] dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.626 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.626 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f826c647770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.626 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.626 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.627 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.627 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.627 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.627 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.627 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.627 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.627 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.627 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.627 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.627 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.627 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.627 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:55:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:55:38.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:55:39 compute-0 podman[243578]: 2026-01-27 22:55:39.396160255 +0000 UTC m=+0.079715948 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 22:55:39 compute-0 podman[243579]: 2026-01-27 22:55:39.399460908 +0000 UTC m=+0.076087907 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, org.label-schema.build-date=20260126)
Jan 27 22:55:39 compute-0 nova_compute[185650]: 2026-01-27 22:55:39.979 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:55:40 compute-0 nova_compute[185650]: 2026-01-27 22:55:40.243 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:55:44 compute-0 podman[243614]: 2026-01-27 22:55:44.363036026 +0000 UTC m=+0.059892939 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 22:55:44 compute-0 nova_compute[185650]: 2026-01-27 22:55:44.981 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:55:45 compute-0 nova_compute[185650]: 2026-01-27 22:55:45.245 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:55:45 compute-0 podman[243639]: 2026-01-27 22:55:45.41418184 +0000 UTC m=+0.115392557 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi)
Jan 27 22:55:48 compute-0 podman[243658]: 2026-01-27 22:55:48.375534912 +0000 UTC m=+0.076199380 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vendor=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, com.redhat.component=ubi9-container, container_name=kepler, 
version=9.4, io.buildah.version=1.29.0, config_id=kepler, io.openshift.expose-services=, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Jan 27 22:55:48 compute-0 podman[243659]: 2026-01-27 22:55:48.410931134 +0000 UTC m=+0.103932219 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 22:55:49 compute-0 nova_compute[185650]: 2026-01-27 22:55:49.983 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:55:50 compute-0 nova_compute[185650]: 2026-01-27 22:55:50.248 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:55:54 compute-0 nova_compute[185650]: 2026-01-27 22:55:54.986 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:55:55 compute-0 nova_compute[185650]: 2026-01-27 22:55:55.251 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:55:56 compute-0 podman[243701]: 2026-01-27 22:55:56.383449507 +0000 UTC m=+0.084837257 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 22:55:56 compute-0 nova_compute[185650]: 2026-01-27 22:55:56.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:55:56 compute-0 nova_compute[185650]: 2026-01-27 22:55:56.994 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 22:55:57 compute-0 nova_compute[185650]: 2026-01-27 22:55:57.995 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:55:58 compute-0 podman[243725]: 2026-01-27 22:55:58.41455851 +0000 UTC m=+0.107126658 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git, config_id=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=9.6, distribution-scope=public, managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container)
Jan 27 22:55:58 compute-0 nova_compute[185650]: 2026-01-27 22:55:58.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:55:58 compute-0 nova_compute[185650]: 2026-01-27 22:55:58.993 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 22:55:58 compute-0 nova_compute[185650]: 2026-01-27 22:55:58.994 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 22:55:59 compute-0 nova_compute[185650]: 2026-01-27 22:55:59.166 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "refresh_cache-344c74c3-95d6-4f19-993f-b4a89c9d074b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 22:55:59 compute-0 nova_compute[185650]: 2026-01-27 22:55:59.166 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquired lock "refresh_cache-344c74c3-95d6-4f19-993f-b4a89c9d074b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 22:55:59 compute-0 nova_compute[185650]: 2026-01-27 22:55:59.167 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 22:55:59 compute-0 nova_compute[185650]: 2026-01-27 22:55:59.167 185654 DEBUG nova.objects.instance [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 344c74c3-95d6-4f19-993f-b4a89c9d074b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 22:55:59 compute-0 podman[201529]: time="2026-01-27T22:55:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:55:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:55:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 22:55:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4380 "" "Go-http-client/1.1"
Jan 27 22:55:59 compute-0 nova_compute[185650]: 2026-01-27 22:55:59.990 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:56:00 compute-0 nova_compute[185650]: 2026-01-27 22:56:00.182 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Updating instance_info_cache with network_info: [{"id": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "address": "fa:16:3e:27:72:fe", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.119", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap389fa2e1-24", "ovs_interfaceid": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 22:56:00 compute-0 nova_compute[185650]: 2026-01-27 22:56:00.198 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Releasing lock "refresh_cache-344c74c3-95d6-4f19-993f-b4a89c9d074b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 22:56:00 compute-0 nova_compute[185650]: 2026-01-27 22:56:00.199 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 22:56:00 compute-0 nova_compute[185650]: 2026-01-27 22:56:00.199 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:56:00 compute-0 nova_compute[185650]: 2026-01-27 22:56:00.200 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:56:00 compute-0 nova_compute[185650]: 2026-01-27 22:56:00.200 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:56:00 compute-0 nova_compute[185650]: 2026-01-27 22:56:00.200 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:56:00 compute-0 nova_compute[185650]: 2026-01-27 22:56:00.223 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:56:00 compute-0 nova_compute[185650]: 2026-01-27 22:56:00.224 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:56:00 compute-0 nova_compute[185650]: 2026-01-27 22:56:00.224 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:56:00 compute-0 nova_compute[185650]: 2026-01-27 22:56:00.224 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 22:56:00 compute-0 nova_compute[185650]: 2026-01-27 22:56:00.253 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:56:00 compute-0 nova_compute[185650]: 2026-01-27 22:56:00.309 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:56:00 compute-0 nova_compute[185650]: 2026-01-27 22:56:00.397 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:56:00 compute-0 nova_compute[185650]: 2026-01-27 22:56:00.398 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:56:00 compute-0 nova_compute[185650]: 2026-01-27 22:56:00.472 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:56:00 compute-0 nova_compute[185650]: 2026-01-27 22:56:00.473 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:56:00 compute-0 nova_compute[185650]: 2026-01-27 22:56:00.535 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:56:00 compute-0 nova_compute[185650]: 2026-01-27 22:56:00.536 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:56:00 compute-0 nova_compute[185650]: 2026-01-27 22:56:00.594 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:56:00 compute-0 nova_compute[185650]: 2026-01-27 22:56:00.603 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:56:00 compute-0 nova_compute[185650]: 2026-01-27 22:56:00.665 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:56:00 compute-0 nova_compute[185650]: 2026-01-27 22:56:00.666 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:56:00 compute-0 nova_compute[185650]: 2026-01-27 22:56:00.735 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:56:00 compute-0 nova_compute[185650]: 2026-01-27 22:56:00.736 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:56:00 compute-0 nova_compute[185650]: 2026-01-27 22:56:00.799 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:56:00 compute-0 nova_compute[185650]: 2026-01-27 22:56:00.800 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:56:00 compute-0 nova_compute[185650]: 2026-01-27 22:56:00.883 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:56:00 compute-0 nova_compute[185650]: 2026-01-27 22:56:00.892 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:56:00 compute-0 nova_compute[185650]: 2026-01-27 22:56:00.961 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:56:00 compute-0 nova_compute[185650]: 2026-01-27 22:56:00.963 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:56:01 compute-0 nova_compute[185650]: 2026-01-27 22:56:01.028 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:56:01 compute-0 nova_compute[185650]: 2026-01-27 22:56:01.029 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:56:01 compute-0 nova_compute[185650]: 2026-01-27 22:56:01.089 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:56:01 compute-0 nova_compute[185650]: 2026-01-27 22:56:01.090 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:56:01 compute-0 nova_compute[185650]: 2026-01-27 22:56:01.152 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:56:01 compute-0 openstack_network_exporter[204648]: ERROR   22:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:56:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:56:01 compute-0 openstack_network_exporter[204648]: ERROR   22:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:56:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:56:01 compute-0 nova_compute[185650]: 2026-01-27 22:56:01.537 185654 WARNING nova.virt.libvirt.driver [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 22:56:01 compute-0 nova_compute[185650]: 2026-01-27 22:56:01.538 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4777MB free_disk=72.37659454345703GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 22:56:01 compute-0 nova_compute[185650]: 2026-01-27 22:56:01.538 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:56:01 compute-0 nova_compute[185650]: 2026-01-27 22:56:01.538 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:56:01 compute-0 nova_compute[185650]: 2026-01-27 22:56:01.604 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 344c74c3-95d6-4f19-993f-b4a89c9d074b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:56:01 compute-0 nova_compute[185650]: 2026-01-27 22:56:01.605 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance dd624b81-38f5-46aa-881b-ca66ace64fd3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:56:01 compute-0 nova_compute[185650]: 2026-01-27 22:56:01.605 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 5409358c-78dc-4761-841a-7f453c6209fb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:56:01 compute-0 nova_compute[185650]: 2026-01-27 22:56:01.605 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 22:56:01 compute-0 nova_compute[185650]: 2026-01-27 22:56:01.605 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 22:56:01 compute-0 nova_compute[185650]: 2026-01-27 22:56:01.998 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 22:56:02 compute-0 nova_compute[185650]: 2026-01-27 22:56:02.011 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 22:56:02 compute-0 nova_compute[185650]: 2026-01-27 22:56:02.013 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 22:56:02 compute-0 nova_compute[185650]: 2026-01-27 22:56:02.013 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.475s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:56:03 compute-0 nova_compute[185650]: 2026-01-27 22:56:03.807 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:56:03 compute-0 nova_compute[185650]: 2026-01-27 22:56:03.808 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:56:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:56:04.146 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:56:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:56:04.146 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:56:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:56:04.147 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:56:04 compute-0 sshd-session[243745]: Connection reset by 147.185.132.94 port 57662 [preauth]
Jan 27 22:56:04 compute-0 nova_compute[185650]: 2026-01-27 22:56:04.994 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:56:05 compute-0 nova_compute[185650]: 2026-01-27 22:56:05.256 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:56:09 compute-0 nova_compute[185650]: 2026-01-27 22:56:09.996 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:56:10 compute-0 nova_compute[185650]: 2026-01-27 22:56:10.259 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:56:10 compute-0 podman[243784]: 2026-01-27 22:56:10.435352457 +0000 UTC m=+0.109180816 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260126, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Jan 27 22:56:10 compute-0 podman[243783]: 2026-01-27 22:56:10.44639999 +0000 UTC m=+0.126928027 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 27 22:56:14 compute-0 podman[243818]: 2026-01-27 22:56:14.771208083 +0000 UTC m=+0.086712397 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 22:56:14 compute-0 nova_compute[185650]: 2026-01-27 22:56:14.999 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:56:15 compute-0 nova_compute[185650]: 2026-01-27 22:56:15.261 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:56:16 compute-0 podman[243842]: 2026-01-27 22:56:16.387583771 +0000 UTC m=+0.086306317 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 22:56:19 compute-0 podman[243861]: 2026-01-27 22:56:19.412726513 +0000 UTC m=+0.098622485 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, config_id=kepler, container_name=kepler, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, maintainer=Red Hat, Inc., name=ubi9, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.openshift.expose-services=)
Jan 27 22:56:19 compute-0 podman[243862]: 2026-01-27 22:56:19.419177975 +0000 UTC m=+0.111910278 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 27 22:56:20 compute-0 nova_compute[185650]: 2026-01-27 22:56:20.002 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:56:20 compute-0 nova_compute[185650]: 2026-01-27 22:56:20.265 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:56:25 compute-0 nova_compute[185650]: 2026-01-27 22:56:25.004 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:56:25 compute-0 nova_compute[185650]: 2026-01-27 22:56:25.267 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:56:27 compute-0 podman[243907]: 2026-01-27 22:56:27.366007964 +0000 UTC m=+0.067655082 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.613 185654 DEBUG oslo_concurrency.lockutils [None req-4dc73713-d139-4900-bd89-e9f9c47ebc15 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "dd624b81-38f5-46aa-881b-ca66ace64fd3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.614 185654 DEBUG oslo_concurrency.lockutils [None req-4dc73713-d139-4900-bd89-e9f9c47ebc15 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "dd624b81-38f5-46aa-881b-ca66ace64fd3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.614 185654 DEBUG oslo_concurrency.lockutils [None req-4dc73713-d139-4900-bd89-e9f9c47ebc15 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "dd624b81-38f5-46aa-881b-ca66ace64fd3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.615 185654 DEBUG oslo_concurrency.lockutils [None req-4dc73713-d139-4900-bd89-e9f9c47ebc15 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "dd624b81-38f5-46aa-881b-ca66ace64fd3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.615 185654 DEBUG oslo_concurrency.lockutils [None req-4dc73713-d139-4900-bd89-e9f9c47ebc15 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "dd624b81-38f5-46aa-881b-ca66ace64fd3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.616 185654 INFO nova.compute.manager [None req-4dc73713-d139-4900-bd89-e9f9c47ebc15 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Terminating instance
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.617 185654 DEBUG nova.compute.manager [None req-4dc73713-d139-4900-bd89-e9f9c47ebc15 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 22:56:27 compute-0 kernel: tapba4dd39b-aa (unregistering): left promiscuous mode
Jan 27 22:56:27 compute-0 NetworkManager[56600]: <info>  [1769554587.6651] device (tapba4dd39b-aa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.675 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:56:27 compute-0 ovn_controller[98048]: 2026-01-27T22:56:27Z|00054|binding|INFO|Releasing lport ba4dd39b-aafe-4664-a6e5-0f4eed30dc40 from this chassis (sb_readonly=0)
Jan 27 22:56:27 compute-0 ovn_controller[98048]: 2026-01-27T22:56:27Z|00055|binding|INFO|Setting lport ba4dd39b-aafe-4664-a6e5-0f4eed30dc40 down in Southbound
Jan 27 22:56:27 compute-0 ovn_controller[98048]: 2026-01-27T22:56:27Z|00056|binding|INFO|Removing iface tapba4dd39b-aa ovn-installed in OVS
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.679 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.693 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:56:27 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:56:27.692 107302 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:77:d7 192.168.0.223'], port_security=['fa:16:3e:54:77:d7 192.168.0.223'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-e3ismbxiivp3-2npykxfceygn-qfpmbakkd4ep-port-kyn5svl6qrpu', 'neutron:cidrs': '192.168.0.223/24', 'neutron:device_id': 'dd624b81-38f5-46aa-881b-ca66ace64fd3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-98f694e3-becc-413f-b42b-35a7171f7f96', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-e3ismbxiivp3-2npykxfceygn-qfpmbakkd4ep-port-kyn5svl6qrpu', 'neutron:project_id': '8318d5a200d74e4386cf4972db015b75', 'neutron:revision_number': '4', 'neutron:security_group_ids': '597f1057-390b-408a-b8d0-705fb45de27b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.201', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d21d3e2-2f64-49c8-bca6-9efc66f5bd67, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8d908cb640>], logical_port=ba4dd39b-aafe-4664-a6e5-0f4eed30dc40) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f8d908cb640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 22:56:27 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:56:27.694 107302 INFO neutron.agent.ovn.metadata.agent [-] Port ba4dd39b-aafe-4664-a6e5-0f4eed30dc40 in datapath 98f694e3-becc-413f-b42b-35a7171f7f96 unbound from our chassis
Jan 27 22:56:27 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:56:27.696 107302 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 98f694e3-becc-413f-b42b-35a7171f7f96
Jan 27 22:56:27 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:56:27.712 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[c10ec610-c23c-405d-a43f-0740ad8f81bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:56:27 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Jan 27 22:56:27 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 1min 24.486s CPU time.
Jan 27 22:56:27 compute-0 systemd-machined[157036]: Machine qemu-3-instance-00000003 terminated.
Jan 27 22:56:27 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:56:27.745 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[4d016fae-2824-4f0c-b8a3-68f8a9708e1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:56:27 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:56:27.748 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[487f3f1d-ce03-4c36-9633-4e47717bb815]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:56:27 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:56:27.784 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[ba9f0ec1-95bd-4ea7-864f-cde0f3322d52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:56:27 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:56:27.800 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[68b9390c-fc4d-4057-bb43-4b9959b39c44]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap98f694e3-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:25:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 13, 'rx_bytes': 658, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 13, 'rx_bytes': 658, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 365000, 'reachable_time': 40409, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243944, 'error': None, 'target': 'ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:56:27 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:56:27.817 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[c0231c53-a617-4692-a99c-94145af455ca]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap98f694e3-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 365013, 'tstamp': 365013}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243945, 'error': None, 'target': 'ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap98f694e3-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 365017, 'tstamp': 365017}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243945, 'error': None, 'target': 'ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 22:56:27 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:56:27.818 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98f694e3-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.820 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.825 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:56:27 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:56:27.826 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98f694e3-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:56:27 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:56:27.826 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 22:56:27 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:56:27.827 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap98f694e3-b0, col_values=(('external_ids', {'iface-id': 'acacffcb-4de9-40c5-aeef-3e5766b557e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:56:27 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:56:27.827 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.900 185654 DEBUG nova.compute.manager [req-23be27b4-86f0-405a-9068-bd543d8571d8 req-54861e87-00e4-41b3-9cdc-f9b9f864b3e1 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Received event network-vif-unplugged-ba4dd39b-aafe-4664-a6e5-0f4eed30dc40 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.900 185654 DEBUG oslo_concurrency.lockutils [req-23be27b4-86f0-405a-9068-bd543d8571d8 req-54861e87-00e4-41b3-9cdc-f9b9f864b3e1 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "dd624b81-38f5-46aa-881b-ca66ace64fd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.901 185654 DEBUG oslo_concurrency.lockutils [req-23be27b4-86f0-405a-9068-bd543d8571d8 req-54861e87-00e4-41b3-9cdc-f9b9f864b3e1 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "dd624b81-38f5-46aa-881b-ca66ace64fd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.901 185654 DEBUG oslo_concurrency.lockutils [req-23be27b4-86f0-405a-9068-bd543d8571d8 req-54861e87-00e4-41b3-9cdc-f9b9f864b3e1 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "dd624b81-38f5-46aa-881b-ca66ace64fd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.901 185654 DEBUG nova.compute.manager [req-23be27b4-86f0-405a-9068-bd543d8571d8 req-54861e87-00e4-41b3-9cdc-f9b9f864b3e1 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] No waiting events found dispatching network-vif-unplugged-ba4dd39b-aafe-4664-a6e5-0f4eed30dc40 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.902 185654 DEBUG nova.compute.manager [req-23be27b4-86f0-405a-9068-bd543d8571d8 req-54861e87-00e4-41b3-9cdc-f9b9f864b3e1 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Received event network-vif-unplugged-ba4dd39b-aafe-4664-a6e5-0f4eed30dc40 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.906 185654 INFO nova.virt.libvirt.driver [-] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Instance destroyed successfully.
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.907 185654 DEBUG nova.objects.instance [None req-4dc73713-d139-4900-bd89-e9f9c47ebc15 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lazy-loading 'resources' on Instance uuid dd624b81-38f5-46aa-881b-ca66ace64fd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.918 185654 DEBUG nova.virt.libvirt.vif [None req-4dc73713-d139-4900-bd89-e9f9c47ebc15 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T22:48:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-bxiivp3-2npykxfceygn-qfpmbakkd4ep-vnf-ztsky6llf24g',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-bxiivp3-2npykxfceygn-qfpmbakkd4ep-vnf-ztsky6llf24g',id=3,image_ref='7e803ca7-2382-4e5a-95f7-55acaa154415',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T22:48:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='3b67098f-eb50-41e2-8c8a-348367561673'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8318d5a200d74e4386cf4972db015b75',ramdisk_id='',reservation_id='r-x9j1qa3e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='7e803ca7-2382-4e5a-95f7-55acaa154415',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T22:48:43Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0yMDMxMjEzMjUzNzY0NzM4MzQ0PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTIwMzEyMTMyNTM3NjQ3MzgzNDQ9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MjAzMTIxMzI1Mzc2NDczODM0ND09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTIwMzEyMTMyNTM3NjQ3MzgzNDQ9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0yMDMxMjEzMjUzNzY0NzM4MzQ0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0yMDMxMjEzMjUzNzY0NzM4MzQ0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Jan 27 22:56:27 compute-0 nova_compute[185650]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MjAzM
TIxMzI1Mzc2NDczODM0ND09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTIwMzEyMTMyNTM3NjQ3MzgzNDQ9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0yMDMxMjEzMjUzNzY0NzM4MzQ0PT0tLQo=',user_id='7387204f74504e288ed7a5dee73f5083',uuid=dd624b81-38f5-46aa-881b-ca66ace64fd3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ba4dd39b-aafe-4664-a6e5-0f4eed30dc40", "address": "fa:16:3e:54:77:d7", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.223", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba4dd39b-aa", "ovs_interfaceid": "ba4dd39b-aafe-4664-a6e5-0f4eed30dc40", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.919 185654 DEBUG nova.network.os_vif_util [None req-4dc73713-d139-4900-bd89-e9f9c47ebc15 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Converting VIF {"id": "ba4dd39b-aafe-4664-a6e5-0f4eed30dc40", "address": "fa:16:3e:54:77:d7", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.223", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba4dd39b-aa", "ovs_interfaceid": "ba4dd39b-aafe-4664-a6e5-0f4eed30dc40", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.919 185654 DEBUG nova.network.os_vif_util [None req-4dc73713-d139-4900-bd89-e9f9c47ebc15 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:54:77:d7,bridge_name='br-int',has_traffic_filtering=True,id=ba4dd39b-aafe-4664-a6e5-0f4eed30dc40,network=Network(98f694e3-becc-413f-b42b-35a7171f7f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba4dd39b-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.919 185654 DEBUG os_vif [None req-4dc73713-d139-4900-bd89-e9f9c47ebc15 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:77:d7,bridge_name='br-int',has_traffic_filtering=True,id=ba4dd39b-aafe-4664-a6e5-0f4eed30dc40,network=Network(98f694e3-becc-413f-b42b-35a7171f7f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba4dd39b-aa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.921 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.921 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba4dd39b-aa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.923 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.925 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.927 185654 INFO os_vif [None req-4dc73713-d139-4900-bd89-e9f9c47ebc15 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:77:d7,bridge_name='br-int',has_traffic_filtering=True,id=ba4dd39b-aafe-4664-a6e5-0f4eed30dc40,network=Network(98f694e3-becc-413f-b42b-35a7171f7f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba4dd39b-aa')
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.928 185654 INFO nova.virt.libvirt.driver [None req-4dc73713-d139-4900-bd89-e9f9c47ebc15 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Deleting instance files /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3_del
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.929 185654 INFO nova.virt.libvirt.driver [None req-4dc73713-d139-4900-bd89-e9f9c47ebc15 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Deletion of /var/lib/nova/instances/dd624b81-38f5-46aa-881b-ca66ace64fd3_del complete
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.987 185654 INFO nova.compute.manager [None req-4dc73713-d139-4900-bd89-e9f9c47ebc15 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Took 0.37 seconds to destroy the instance on the hypervisor.
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.987 185654 DEBUG oslo.service.loopingcall [None req-4dc73713-d139-4900-bd89-e9f9c47ebc15 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.988 185654 DEBUG nova.compute.manager [-] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 22:56:27 compute-0 nova_compute[185650]: 2026-01-27 22:56:27.988 185654 DEBUG nova.network.neutron [-] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 22:56:28 compute-0 rsyslogd[235951]: message too long (8192) with configured size 8096, begin of message is: 2026-01-27 22:56:27.918 185654 DEBUG nova.virt.libvirt.vif [None req-4dc73713-d1 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 22:56:28 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:56:28.723 107302 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:41:f8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '26:ae:8e:b8:80:28'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 22:56:28 compute-0 nova_compute[185650]: 2026-01-27 22:56:28.724 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:56:28 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:56:28.725 107302 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 22:56:29 compute-0 podman[243968]: 2026-01-27 22:56:29.391410591 +0000 UTC m=+0.083950734 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, version=9.6, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., config_id=openstack_network_exporter, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, container_name=openstack_network_exporter)
Jan 27 22:56:29 compute-0 podman[201529]: time="2026-01-27T22:56:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:56:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:56:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 22:56:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4379 "" "Go-http-client/1.1"
Jan 27 22:56:29 compute-0 nova_compute[185650]: 2026-01-27 22:56:29.979 185654 DEBUG nova.compute.manager [req-6f18a0d3-d473-4a0f-9e06-704b4e72ee41 req-daf42b3c-dcf5-45df-967a-c5c17941d882 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Received event network-changed-ba4dd39b-aafe-4664-a6e5-0f4eed30dc40 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 22:56:29 compute-0 nova_compute[185650]: 2026-01-27 22:56:29.980 185654 DEBUG nova.compute.manager [req-6f18a0d3-d473-4a0f-9e06-704b4e72ee41 req-daf42b3c-dcf5-45df-967a-c5c17941d882 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Refreshing instance network info cache due to event network-changed-ba4dd39b-aafe-4664-a6e5-0f4eed30dc40. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 22:56:29 compute-0 nova_compute[185650]: 2026-01-27 22:56:29.980 185654 DEBUG oslo_concurrency.lockutils [req-6f18a0d3-d473-4a0f-9e06-704b4e72ee41 req-daf42b3c-dcf5-45df-967a-c5c17941d882 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "refresh_cache-dd624b81-38f5-46aa-881b-ca66ace64fd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 22:56:29 compute-0 nova_compute[185650]: 2026-01-27 22:56:29.981 185654 DEBUG oslo_concurrency.lockutils [req-6f18a0d3-d473-4a0f-9e06-704b4e72ee41 req-daf42b3c-dcf5-45df-967a-c5c17941d882 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquired lock "refresh_cache-dd624b81-38f5-46aa-881b-ca66ace64fd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 22:56:29 compute-0 nova_compute[185650]: 2026-01-27 22:56:29.981 185654 DEBUG nova.network.neutron [req-6f18a0d3-d473-4a0f-9e06-704b4e72ee41 req-daf42b3c-dcf5-45df-967a-c5c17941d882 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Refreshing network info cache for port ba4dd39b-aafe-4664-a6e5-0f4eed30dc40 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 22:56:30 compute-0 nova_compute[185650]: 2026-01-27 22:56:30.059 185654 DEBUG nova.compute.manager [req-df8e5666-1f9c-4a27-b815-528fb334df36 req-09c1adb1-0c3b-43b0-958a-550049da8660 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Received event network-vif-plugged-ba4dd39b-aafe-4664-a6e5-0f4eed30dc40 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 22:56:30 compute-0 nova_compute[185650]: 2026-01-27 22:56:30.060 185654 DEBUG oslo_concurrency.lockutils [req-df8e5666-1f9c-4a27-b815-528fb334df36 req-09c1adb1-0c3b-43b0-958a-550049da8660 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "dd624b81-38f5-46aa-881b-ca66ace64fd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:56:30 compute-0 nova_compute[185650]: 2026-01-27 22:56:30.060 185654 DEBUG oslo_concurrency.lockutils [req-df8e5666-1f9c-4a27-b815-528fb334df36 req-09c1adb1-0c3b-43b0-958a-550049da8660 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "dd624b81-38f5-46aa-881b-ca66ace64fd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:56:30 compute-0 nova_compute[185650]: 2026-01-27 22:56:30.061 185654 DEBUG oslo_concurrency.lockutils [req-df8e5666-1f9c-4a27-b815-528fb334df36 req-09c1adb1-0c3b-43b0-958a-550049da8660 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "dd624b81-38f5-46aa-881b-ca66ace64fd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:56:30 compute-0 nova_compute[185650]: 2026-01-27 22:56:30.062 185654 DEBUG nova.compute.manager [req-df8e5666-1f9c-4a27-b815-528fb334df36 req-09c1adb1-0c3b-43b0-958a-550049da8660 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] No waiting events found dispatching network-vif-plugged-ba4dd39b-aafe-4664-a6e5-0f4eed30dc40 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 22:56:30 compute-0 nova_compute[185650]: 2026-01-27 22:56:30.063 185654 WARNING nova.compute.manager [req-df8e5666-1f9c-4a27-b815-528fb334df36 req-09c1adb1-0c3b-43b0-958a-550049da8660 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Received unexpected event network-vif-plugged-ba4dd39b-aafe-4664-a6e5-0f4eed30dc40 for instance with vm_state active and task_state deleting.
Jan 27 22:56:30 compute-0 nova_compute[185650]: 2026-01-27 22:56:30.134 185654 INFO nova.network.neutron [req-6f18a0d3-d473-4a0f-9e06-704b4e72ee41 req-daf42b3c-dcf5-45df-967a-c5c17941d882 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Port ba4dd39b-aafe-4664-a6e5-0f4eed30dc40 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Jan 27 22:56:30 compute-0 nova_compute[185650]: 2026-01-27 22:56:30.135 185654 DEBUG nova.network.neutron [req-6f18a0d3-d473-4a0f-9e06-704b4e72ee41 req-daf42b3c-dcf5-45df-967a-c5c17941d882 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 22:56:30 compute-0 nova_compute[185650]: 2026-01-27 22:56:30.171 185654 DEBUG oslo_concurrency.lockutils [req-6f18a0d3-d473-4a0f-9e06-704b4e72ee41 req-daf42b3c-dcf5-45df-967a-c5c17941d882 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Releasing lock "refresh_cache-dd624b81-38f5-46aa-881b-ca66ace64fd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 22:56:30 compute-0 nova_compute[185650]: 2026-01-27 22:56:30.269 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:56:30 compute-0 nova_compute[185650]: 2026-01-27 22:56:30.552 185654 DEBUG nova.network.neutron [-] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 22:56:30 compute-0 nova_compute[185650]: 2026-01-27 22:56:30.566 185654 INFO nova.compute.manager [-] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Took 2.58 seconds to deallocate network for instance.
Jan 27 22:56:30 compute-0 nova_compute[185650]: 2026-01-27 22:56:30.599 185654 DEBUG oslo_concurrency.lockutils [None req-4dc73713-d139-4900-bd89-e9f9c47ebc15 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:56:30 compute-0 nova_compute[185650]: 2026-01-27 22:56:30.599 185654 DEBUG oslo_concurrency.lockutils [None req-4dc73713-d139-4900-bd89-e9f9c47ebc15 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:56:30 compute-0 nova_compute[185650]: 2026-01-27 22:56:30.710 185654 DEBUG nova.compute.provider_tree [None req-4dc73713-d139-4900-bd89-e9f9c47ebc15 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 22:56:30 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:56:30.727 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e88f80e1-ee63-4bdc-95c3-ad473efb7428, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 22:56:30 compute-0 nova_compute[185650]: 2026-01-27 22:56:30.730 185654 DEBUG nova.scheduler.client.report [None req-4dc73713-d139-4900-bd89-e9f9c47ebc15 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 22:56:30 compute-0 nova_compute[185650]: 2026-01-27 22:56:30.751 185654 DEBUG oslo_concurrency.lockutils [None req-4dc73713-d139-4900-bd89-e9f9c47ebc15 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:56:30 compute-0 nova_compute[185650]: 2026-01-27 22:56:30.773 185654 INFO nova.scheduler.client.report [None req-4dc73713-d139-4900-bd89-e9f9c47ebc15 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Deleted allocations for instance dd624b81-38f5-46aa-881b-ca66ace64fd3
Jan 27 22:56:30 compute-0 nova_compute[185650]: 2026-01-27 22:56:30.826 185654 DEBUG oslo_concurrency.lockutils [None req-4dc73713-d139-4900-bd89-e9f9c47ebc15 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "dd624b81-38f5-46aa-881b-ca66ace64fd3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:56:31 compute-0 openstack_network_exporter[204648]: ERROR   22:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:56:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:56:31 compute-0 openstack_network_exporter[204648]: ERROR   22:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:56:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:56:32 compute-0 nova_compute[185650]: 2026-01-27 22:56:32.924 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:56:35 compute-0 nova_compute[185650]: 2026-01-27 22:56:35.271 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:56:37 compute-0 nova_compute[185650]: 2026-01-27 22:56:37.926 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:56:40 compute-0 nova_compute[185650]: 2026-01-27 22:56:40.275 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:56:41 compute-0 podman[243991]: 2026-01-27 22:56:41.422874723 +0000 UTC m=+0.109585366 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 27 22:56:41 compute-0 podman[243992]: 2026-01-27 22:56:41.425248956 +0000 UTC m=+0.104193003 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 27 22:56:42 compute-0 nova_compute[185650]: 2026-01-27 22:56:42.905 185654 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769554587.9034874, dd624b81-38f5-46aa-881b-ca66ace64fd3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 22:56:42 compute-0 nova_compute[185650]: 2026-01-27 22:56:42.905 185654 INFO nova.compute.manager [-] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] VM Stopped (Lifecycle Event)
Jan 27 22:56:42 compute-0 nova_compute[185650]: 2026-01-27 22:56:42.928 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:56:42 compute-0 nova_compute[185650]: 2026-01-27 22:56:42.930 185654 DEBUG nova.compute.manager [None req-b40f00ee-d89c-43e7-82a4-d3bb81005a0b - - - - - -] [instance: dd624b81-38f5-46aa-881b-ca66ace64fd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 22:56:45 compute-0 nova_compute[185650]: 2026-01-27 22:56:45.279 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:56:45 compute-0 podman[244027]: 2026-01-27 22:56:45.408971716 +0000 UTC m=+0.099158628 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 22:56:47 compute-0 podman[244051]: 2026-01-27 22:56:47.43122053 +0000 UTC m=+0.121850032 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 22:56:47 compute-0 nova_compute[185650]: 2026-01-27 22:56:47.931 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:56:50 compute-0 nova_compute[185650]: 2026-01-27 22:56:50.283 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:56:50 compute-0 podman[244073]: 2026-01-27 22:56:50.390621653 +0000 UTC m=+0.083994785 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, version=9.4, architecture=x86_64, io.openshift.expose-services=, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, vcs-type=git, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, managed_by=edpm_ansible, name=ubi9, build-date=2024-09-18T21:23:30, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 22:56:50 compute-0 podman[244074]: 2026-01-27 22:56:50.414142128 +0000 UTC m=+0.107476929 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 22:56:52 compute-0 nova_compute[185650]: 2026-01-27 22:56:52.935 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:56:55 compute-0 nova_compute[185650]: 2026-01-27 22:56:55.286 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:56:56 compute-0 nova_compute[185650]: 2026-01-27 22:56:56.992 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:56:56 compute-0 nova_compute[185650]: 2026-01-27 22:56:56.993 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 22:56:57 compute-0 nova_compute[185650]: 2026-01-27 22:56:57.937 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:56:58 compute-0 podman[244119]: 2026-01-27 22:56:58.39926328 +0000 UTC m=+0.101643225 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 22:56:58 compute-0 nova_compute[185650]: 2026-01-27 22:56:58.994 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:56:58 compute-0 nova_compute[185650]: 2026-01-27 22:56:58.994 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 22:56:59 compute-0 nova_compute[185650]: 2026-01-27 22:56:59.618 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "refresh_cache-5409358c-78dc-4761-841a-7f453c6209fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 22:56:59 compute-0 nova_compute[185650]: 2026-01-27 22:56:59.619 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquired lock "refresh_cache-5409358c-78dc-4761-841a-7f453c6209fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 22:56:59 compute-0 nova_compute[185650]: 2026-01-27 22:56:59.619 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 22:56:59 compute-0 podman[201529]: time="2026-01-27T22:56:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:56:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:56:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 22:56:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4376 "" "Go-http-client/1.1"
Jan 27 22:57:00 compute-0 sshd-session[244141]: Accepted publickey for zuul from 38.102.83.151 port 33938 ssh2: RSA SHA256:ZuKoWm/C8Whnhgf9tPVFWdXLNeFqjD7XfMzDvbUlFFI
Jan 27 22:57:00 compute-0 systemd-logind[789]: New session 29 of user zuul.
Jan 27 22:57:00 compute-0 systemd[1]: Started Session 29 of User zuul.
Jan 27 22:57:00 compute-0 sshd-session[244141]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 22:57:00 compute-0 nova_compute[185650]: 2026-01-27 22:57:00.292 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:57:00 compute-0 podman[244143]: 2026-01-27 22:57:00.306250238 +0000 UTC m=+0.094853714 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, version=9.6, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-type=git, io.buildah.version=1.33.7, managed_by=edpm_ansible, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64)
Jan 27 22:57:00 compute-0 sudo[244338]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axtpdajesxsqymvttpsewabnpxijabep ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769554620.352942-58737-122890008372110/AnsiballZ_command.py'
Jan 27 22:57:00 compute-0 sudo[244338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:57:01 compute-0 nova_compute[185650]: 2026-01-27 22:57:01.082 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Updating instance_info_cache with network_info: [{"id": "ccfe58e9-3ff7-4073-9f9f-c8e641661ba0", "address": "fa:16:3e:17:dc:a3", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccfe58e9-3f", "ovs_interfaceid": "ccfe58e9-3ff7-4073-9f9f-c8e641661ba0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 22:57:01 compute-0 python3[244340]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:57:01 compute-0 nova_compute[185650]: 2026-01-27 22:57:01.120 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Releasing lock "refresh_cache-5409358c-78dc-4761-841a-7f453c6209fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 22:57:01 compute-0 nova_compute[185650]: 2026-01-27 22:57:01.122 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 22:57:01 compute-0 nova_compute[185650]: 2026-01-27 22:57:01.123 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:57:01 compute-0 nova_compute[185650]: 2026-01-27 22:57:01.124 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:57:01 compute-0 nova_compute[185650]: 2026-01-27 22:57:01.124 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:57:01 compute-0 nova_compute[185650]: 2026-01-27 22:57:01.125 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:57:01 compute-0 sudo[244338]: pam_unix(sudo:session): session closed for user root
Jan 27 22:57:01 compute-0 openstack_network_exporter[204648]: ERROR   22:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:57:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:57:01 compute-0 openstack_network_exporter[204648]: ERROR   22:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:57:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:57:01 compute-0 nova_compute[185650]: 2026-01-27 22:57:01.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:57:02 compute-0 nova_compute[185650]: 2026-01-27 22:57:02.021 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:57:02 compute-0 nova_compute[185650]: 2026-01-27 22:57:02.022 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:57:02 compute-0 nova_compute[185650]: 2026-01-27 22:57:02.022 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:57:02 compute-0 nova_compute[185650]: 2026-01-27 22:57:02.023 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 22:57:02 compute-0 nova_compute[185650]: 2026-01-27 22:57:02.146 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:57:02 compute-0 nova_compute[185650]: 2026-01-27 22:57:02.204 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:57:02 compute-0 nova_compute[185650]: 2026-01-27 22:57:02.206 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:57:02 compute-0 nova_compute[185650]: 2026-01-27 22:57:02.264 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:57:02 compute-0 nova_compute[185650]: 2026-01-27 22:57:02.265 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:57:02 compute-0 nova_compute[185650]: 2026-01-27 22:57:02.322 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:57:02 compute-0 nova_compute[185650]: 2026-01-27 22:57:02.323 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:57:02 compute-0 nova_compute[185650]: 2026-01-27 22:57:02.385 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:57:02 compute-0 nova_compute[185650]: 2026-01-27 22:57:02.393 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:57:02 compute-0 nova_compute[185650]: 2026-01-27 22:57:02.451 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:57:02 compute-0 nova_compute[185650]: 2026-01-27 22:57:02.452 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:57:02 compute-0 nova_compute[185650]: 2026-01-27 22:57:02.511 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:57:02 compute-0 nova_compute[185650]: 2026-01-27 22:57:02.512 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:57:02 compute-0 nova_compute[185650]: 2026-01-27 22:57:02.573 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:57:02 compute-0 nova_compute[185650]: 2026-01-27 22:57:02.574 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:57:02 compute-0 nova_compute[185650]: 2026-01-27 22:57:02.633 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:57:02 compute-0 ovn_controller[98048]: 2026-01-27T22:57:02Z|00057|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Jan 27 22:57:02 compute-0 nova_compute[185650]: 2026-01-27 22:57:02.940 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:57:02 compute-0 nova_compute[185650]: 2026-01-27 22:57:02.987 185654 WARNING nova.virt.libvirt.driver [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 22:57:02 compute-0 nova_compute[185650]: 2026-01-27 22:57:02.988 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4927MB free_disk=72.39859390258789GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 22:57:02 compute-0 nova_compute[185650]: 2026-01-27 22:57:02.989 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:57:02 compute-0 nova_compute[185650]: 2026-01-27 22:57:02.989 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:57:03 compute-0 nova_compute[185650]: 2026-01-27 22:57:03.123 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 344c74c3-95d6-4f19-993f-b4a89c9d074b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:57:03 compute-0 nova_compute[185650]: 2026-01-27 22:57:03.124 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 5409358c-78dc-4761-841a-7f453c6209fb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:57:03 compute-0 nova_compute[185650]: 2026-01-27 22:57:03.124 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 22:57:03 compute-0 nova_compute[185650]: 2026-01-27 22:57:03.124 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 22:57:03 compute-0 nova_compute[185650]: 2026-01-27 22:57:03.181 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 22:57:03 compute-0 nova_compute[185650]: 2026-01-27 22:57:03.201 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 22:57:03 compute-0 nova_compute[185650]: 2026-01-27 22:57:03.227 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 22:57:03 compute-0 nova_compute[185650]: 2026-01-27 22:57:03.227 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.238s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:57:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:57:04.147 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:57:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:57:04.148 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:57:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:57:04.149 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:57:04 compute-0 nova_compute[185650]: 2026-01-27 22:57:04.228 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:57:04 compute-0 nova_compute[185650]: 2026-01-27 22:57:04.229 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:57:04 compute-0 nova_compute[185650]: 2026-01-27 22:57:04.988 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:57:05 compute-0 nova_compute[185650]: 2026-01-27 22:57:05.292 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:57:07 compute-0 nova_compute[185650]: 2026-01-27 22:57:07.944 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:57:10 compute-0 nova_compute[185650]: 2026-01-27 22:57:10.295 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:57:12 compute-0 podman[244404]: 2026-01-27 22:57:12.403756597 +0000 UTC m=+0.107862057 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 27 22:57:12 compute-0 podman[244405]: 2026-01-27 22:57:12.40616229 +0000 UTC m=+0.109576269 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Jan 27 22:57:12 compute-0 nova_compute[185650]: 2026-01-27 22:57:12.946 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:57:15 compute-0 nova_compute[185650]: 2026-01-27 22:57:15.298 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:57:16 compute-0 podman[244443]: 2026-01-27 22:57:16.372982992 +0000 UTC m=+0.069702601 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 22:57:17 compute-0 nova_compute[185650]: 2026-01-27 22:57:17.950 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:57:18 compute-0 nova_compute[185650]: 2026-01-27 22:57:18.002 185654 DEBUG oslo_concurrency.lockutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "5a1c83d6-db00-4f46-98d7-1b0c20b3bb82" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:57:18 compute-0 nova_compute[185650]: 2026-01-27 22:57:18.003 185654 DEBUG oslo_concurrency.lockutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "5a1c83d6-db00-4f46-98d7-1b0c20b3bb82" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:57:18 compute-0 nova_compute[185650]: 2026-01-27 22:57:18.025 185654 DEBUG nova.compute.manager [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 22:57:18 compute-0 nova_compute[185650]: 2026-01-27 22:57:18.096 185654 DEBUG oslo_concurrency.lockutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:57:18 compute-0 nova_compute[185650]: 2026-01-27 22:57:18.096 185654 DEBUG oslo_concurrency.lockutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:57:18 compute-0 nova_compute[185650]: 2026-01-27 22:57:18.109 185654 DEBUG nova.virt.hardware [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 22:57:18 compute-0 nova_compute[185650]: 2026-01-27 22:57:18.110 185654 INFO nova.compute.claims [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Claim successful on node compute-0.ctlplane.example.com
Jan 27 22:57:18 compute-0 nova_compute[185650]: 2026-01-27 22:57:18.243 185654 DEBUG nova.compute.provider_tree [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 22:57:18 compute-0 nova_compute[185650]: 2026-01-27 22:57:18.261 185654 DEBUG nova.scheduler.client.report [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 22:57:18 compute-0 nova_compute[185650]: 2026-01-27 22:57:18.283 185654 DEBUG oslo_concurrency.lockutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:57:18 compute-0 nova_compute[185650]: 2026-01-27 22:57:18.285 185654 DEBUG nova.compute.manager [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 22:57:18 compute-0 nova_compute[185650]: 2026-01-27 22:57:18.347 185654 DEBUG nova.compute.manager [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 27 22:57:18 compute-0 nova_compute[185650]: 2026-01-27 22:57:18.363 185654 INFO nova.virt.libvirt.driver [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 22:57:18 compute-0 podman[244467]: 2026-01-27 22:57:18.400202197 +0000 UTC m=+0.102853321 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 22:57:18 compute-0 nova_compute[185650]: 2026-01-27 22:57:18.399 185654 DEBUG nova.compute.manager [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 22:57:18 compute-0 nova_compute[185650]: 2026-01-27 22:57:18.474 185654 DEBUG nova.compute.manager [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 22:57:18 compute-0 nova_compute[185650]: 2026-01-27 22:57:18.476 185654 DEBUG nova.virt.libvirt.driver [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 22:57:18 compute-0 nova_compute[185650]: 2026-01-27 22:57:18.477 185654 INFO nova.virt.libvirt.driver [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Creating image(s)
Jan 27 22:57:18 compute-0 nova_compute[185650]: 2026-01-27 22:57:18.478 185654 DEBUG oslo_concurrency.lockutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "/var/lib/nova/instances/5a1c83d6-db00-4f46-98d7-1b0c20b3bb82/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:57:18 compute-0 nova_compute[185650]: 2026-01-27 22:57:18.478 185654 DEBUG oslo_concurrency.lockutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "/var/lib/nova/instances/5a1c83d6-db00-4f46-98d7-1b0c20b3bb82/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:57:18 compute-0 nova_compute[185650]: 2026-01-27 22:57:18.479 185654 DEBUG oslo_concurrency.lockutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "/var/lib/nova/instances/5a1c83d6-db00-4f46-98d7-1b0c20b3bb82/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:57:18 compute-0 nova_compute[185650]: 2026-01-27 22:57:18.479 185654 DEBUG oslo_concurrency.lockutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "e14149cbf430203206ec68274e06fda5db052165" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:57:18 compute-0 nova_compute[185650]: 2026-01-27 22:57:18.480 185654 DEBUG oslo_concurrency.lockutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "e14149cbf430203206ec68274e06fda5db052165" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:57:19 compute-0 nova_compute[185650]: 2026-01-27 22:57:19.607 185654 DEBUG oslo_concurrency.processutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e14149cbf430203206ec68274e06fda5db052165.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:57:19 compute-0 nova_compute[185650]: 2026-01-27 22:57:19.669 185654 DEBUG oslo_concurrency.processutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e14149cbf430203206ec68274e06fda5db052165.part --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:57:19 compute-0 nova_compute[185650]: 2026-01-27 22:57:19.670 185654 DEBUG nova.virt.images [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] 9392a68e-fd47-48e2-86cd-5d13d7da9362 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 27 22:57:19 compute-0 nova_compute[185650]: 2026-01-27 22:57:19.672 185654 DEBUG nova.privsep.utils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 27 22:57:19 compute-0 nova_compute[185650]: 2026-01-27 22:57:19.672 185654 DEBUG oslo_concurrency.processutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/e14149cbf430203206ec68274e06fda5db052165.part /var/lib/nova/instances/_base/e14149cbf430203206ec68274e06fda5db052165.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:57:19 compute-0 nova_compute[185650]: 2026-01-27 22:57:19.837 185654 DEBUG oslo_concurrency.processutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/e14149cbf430203206ec68274e06fda5db052165.part /var/lib/nova/instances/_base/e14149cbf430203206ec68274e06fda5db052165.converted" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:57:19 compute-0 nova_compute[185650]: 2026-01-27 22:57:19.841 185654 DEBUG oslo_concurrency.processutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e14149cbf430203206ec68274e06fda5db052165.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:57:19 compute-0 nova_compute[185650]: 2026-01-27 22:57:19.896 185654 DEBUG oslo_concurrency.processutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e14149cbf430203206ec68274e06fda5db052165.converted --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:57:19 compute-0 nova_compute[185650]: 2026-01-27 22:57:19.898 185654 DEBUG oslo_concurrency.lockutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "e14149cbf430203206ec68274e06fda5db052165" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.418s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:57:19 compute-0 nova_compute[185650]: 2026-01-27 22:57:19.911 185654 DEBUG oslo_concurrency.processutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e14149cbf430203206ec68274e06fda5db052165 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:57:19 compute-0 nova_compute[185650]: 2026-01-27 22:57:19.971 185654 DEBUG oslo_concurrency.processutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e14149cbf430203206ec68274e06fda5db052165 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:57:19 compute-0 nova_compute[185650]: 2026-01-27 22:57:19.972 185654 DEBUG oslo_concurrency.lockutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "e14149cbf430203206ec68274e06fda5db052165" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:57:19 compute-0 nova_compute[185650]: 2026-01-27 22:57:19.973 185654 DEBUG oslo_concurrency.lockutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "e14149cbf430203206ec68274e06fda5db052165" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:57:19 compute-0 nova_compute[185650]: 2026-01-27 22:57:19.987 185654 DEBUG oslo_concurrency.processutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e14149cbf430203206ec68274e06fda5db052165 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.060 185654 DEBUG oslo_concurrency.processutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e14149cbf430203206ec68274e06fda5db052165 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.061 185654 DEBUG oslo_concurrency.processutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e14149cbf430203206ec68274e06fda5db052165,backing_fmt=raw /var/lib/nova/instances/5a1c83d6-db00-4f46-98d7-1b0c20b3bb82/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.099 185654 DEBUG oslo_concurrency.processutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e14149cbf430203206ec68274e06fda5db052165,backing_fmt=raw /var/lib/nova/instances/5a1c83d6-db00-4f46-98d7-1b0c20b3bb82/disk 1073741824" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.100 185654 DEBUG oslo_concurrency.lockutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "e14149cbf430203206ec68274e06fda5db052165" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.101 185654 DEBUG oslo_concurrency.processutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e14149cbf430203206ec68274e06fda5db052165 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.162 185654 DEBUG oslo_concurrency.processutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e14149cbf430203206ec68274e06fda5db052165 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.164 185654 DEBUG nova.virt.disk.api [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Checking if we can resize image /var/lib/nova/instances/5a1c83d6-db00-4f46-98d7-1b0c20b3bb82/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.166 185654 DEBUG oslo_concurrency.processutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5a1c83d6-db00-4f46-98d7-1b0c20b3bb82/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.230 185654 DEBUG oslo_concurrency.processutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5a1c83d6-db00-4f46-98d7-1b0c20b3bb82/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.231 185654 DEBUG nova.virt.disk.api [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Cannot resize image /var/lib/nova/instances/5a1c83d6-db00-4f46-98d7-1b0c20b3bb82/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.231 185654 DEBUG nova.objects.instance [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lazy-loading 'migration_context' on Instance uuid 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.253 185654 DEBUG oslo_concurrency.lockutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "/var/lib/nova/instances/5a1c83d6-db00-4f46-98d7-1b0c20b3bb82/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.253 185654 DEBUG oslo_concurrency.lockutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "/var/lib/nova/instances/5a1c83d6-db00-4f46-98d7-1b0c20b3bb82/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.254 185654 DEBUG oslo_concurrency.lockutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "/var/lib/nova/instances/5a1c83d6-db00-4f46-98d7-1b0c20b3bb82/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.267 185654 DEBUG oslo_concurrency.processutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.300 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.329 185654 DEBUG oslo_concurrency.processutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.330 185654 DEBUG oslo_concurrency.lockutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.330 185654 DEBUG oslo_concurrency.lockutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.340 185654 DEBUG oslo_concurrency.processutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.394 185654 DEBUG oslo_concurrency.processutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.395 185654 DEBUG oslo_concurrency.processutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/5a1c83d6-db00-4f46-98d7-1b0c20b3bb82/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.435 185654 DEBUG oslo_concurrency.processutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/5a1c83d6-db00-4f46-98d7-1b0c20b3bb82/disk.eph0 1073741824" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.437 185654 DEBUG oslo_concurrency.lockutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.437 185654 DEBUG oslo_concurrency.processutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.489 185654 DEBUG oslo_concurrency.processutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.489 185654 DEBUG nova.virt.libvirt.driver [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.490 185654 DEBUG nova.virt.libvirt.driver [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Ensure instance console log exists: /var/lib/nova/instances/5a1c83d6-db00-4f46-98d7-1b0c20b3bb82/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.490 185654 DEBUG oslo_concurrency.lockutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.490 185654 DEBUG oslo_concurrency.lockutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.491 185654 DEBUG oslo_concurrency.lockutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.492 185654 DEBUG nova.virt.libvirt.driver [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-27T22:57:05Z,direct_url=<?>,disk_format='qcow2',id=9392a68e-fd47-48e2-86cd-5d13d7da9362,min_disk=0,min_ram=0,name='fvt_testing_image',owner='8318d5a200d74e4386cf4972db015b75',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-27T22:57:10Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_format': None, 'boot_index': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '9392a68e-fd47-48e2-86cd-5d13d7da9362'}], 'ephemerals': [{'size': 1, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'encrypted': False}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.498 185654 WARNING nova.virt.libvirt.driver [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.503 185654 DEBUG nova.virt.libvirt.host [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.504 185654 DEBUG nova.virt.libvirt.host [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.507 185654 DEBUG nova.virt.libvirt.host [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.507 185654 DEBUG nova.virt.libvirt.host [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.508 185654 DEBUG nova.virt.libvirt.driver [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.508 185654 DEBUG nova.virt.hardware [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T22:57:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='9f9be9e8-dad6-411d-8613-5eb6b9ec552e',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-27T22:57:05Z,direct_url=<?>,disk_format='qcow2',id=9392a68e-fd47-48e2-86cd-5d13d7da9362,min_disk=0,min_ram=0,name='fvt_testing_image',owner='8318d5a200d74e4386cf4972db015b75',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-27T22:57:10Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.508 185654 DEBUG nova.virt.hardware [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.508 185654 DEBUG nova.virt.hardware [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.509 185654 DEBUG nova.virt.hardware [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.509 185654 DEBUG nova.virt.hardware [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.509 185654 DEBUG nova.virt.hardware [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.509 185654 DEBUG nova.virt.hardware [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.509 185654 DEBUG nova.virt.hardware [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.509 185654 DEBUG nova.virt.hardware [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.510 185654 DEBUG nova.virt.hardware [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.510 185654 DEBUG nova.virt.hardware [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.513 185654 DEBUG nova.objects.instance [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.527 185654 DEBUG nova.virt.libvirt.driver [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] End _get_guest_xml xml=<domain type="kvm">
Jan 27 22:57:20 compute-0 nova_compute[185650]:   <uuid>5a1c83d6-db00-4f46-98d7-1b0c20b3bb82</uuid>
Jan 27 22:57:20 compute-0 nova_compute[185650]:   <name>instance-00000005</name>
Jan 27 22:57:20 compute-0 nova_compute[185650]:   <memory>524288</memory>
Jan 27 22:57:20 compute-0 nova_compute[185650]:   <vcpu>1</vcpu>
Jan 27 22:57:20 compute-0 nova_compute[185650]:   <metadata>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 22:57:20 compute-0 nova_compute[185650]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:       <nova:name>fvt_testing_server</nova:name>
Jan 27 22:57:20 compute-0 nova_compute[185650]:       <nova:creationTime>2026-01-27 22:57:20</nova:creationTime>
Jan 27 22:57:20 compute-0 nova_compute[185650]:       <nova:flavor name="fvt_testing_flavor">
Jan 27 22:57:20 compute-0 nova_compute[185650]:         <nova:memory>512</nova:memory>
Jan 27 22:57:20 compute-0 nova_compute[185650]:         <nova:disk>1</nova:disk>
Jan 27 22:57:20 compute-0 nova_compute[185650]:         <nova:swap>0</nova:swap>
Jan 27 22:57:20 compute-0 nova_compute[185650]:         <nova:ephemeral>1</nova:ephemeral>
Jan 27 22:57:20 compute-0 nova_compute[185650]:         <nova:vcpus>1</nova:vcpus>
Jan 27 22:57:20 compute-0 nova_compute[185650]:       </nova:flavor>
Jan 27 22:57:20 compute-0 nova_compute[185650]:       <nova:owner>
Jan 27 22:57:20 compute-0 nova_compute[185650]:         <nova:user uuid="7387204f74504e288ed7a5dee73f5083">admin</nova:user>
Jan 27 22:57:20 compute-0 nova_compute[185650]:         <nova:project uuid="8318d5a200d74e4386cf4972db015b75">admin</nova:project>
Jan 27 22:57:20 compute-0 nova_compute[185650]:       </nova:owner>
Jan 27 22:57:20 compute-0 nova_compute[185650]:       <nova:root type="image" uuid="9392a68e-fd47-48e2-86cd-5d13d7da9362"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:       <nova:ports/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     </nova:instance>
Jan 27 22:57:20 compute-0 nova_compute[185650]:   </metadata>
Jan 27 22:57:20 compute-0 nova_compute[185650]:   <sysinfo type="smbios">
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <system>
Jan 27 22:57:20 compute-0 nova_compute[185650]:       <entry name="manufacturer">RDO</entry>
Jan 27 22:57:20 compute-0 nova_compute[185650]:       <entry name="product">OpenStack Compute</entry>
Jan 27 22:57:20 compute-0 nova_compute[185650]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 22:57:20 compute-0 nova_compute[185650]:       <entry name="serial">5a1c83d6-db00-4f46-98d7-1b0c20b3bb82</entry>
Jan 27 22:57:20 compute-0 nova_compute[185650]:       <entry name="uuid">5a1c83d6-db00-4f46-98d7-1b0c20b3bb82</entry>
Jan 27 22:57:20 compute-0 nova_compute[185650]:       <entry name="family">Virtual Machine</entry>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     </system>
Jan 27 22:57:20 compute-0 nova_compute[185650]:   </sysinfo>
Jan 27 22:57:20 compute-0 nova_compute[185650]:   <os>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <boot dev="hd"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <smbios mode="sysinfo"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:   </os>
Jan 27 22:57:20 compute-0 nova_compute[185650]:   <features>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <acpi/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <apic/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <vmcoreinfo/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:   </features>
Jan 27 22:57:20 compute-0 nova_compute[185650]:   <clock offset="utc">
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <timer name="hpet" present="no"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:   </clock>
Jan 27 22:57:20 compute-0 nova_compute[185650]:   <cpu mode="host-model" match="exact">
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:   </cpu>
Jan 27 22:57:20 compute-0 nova_compute[185650]:   <devices>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <disk type="file" device="disk">
Jan 27 22:57:20 compute-0 nova_compute[185650]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:       <source file="/var/lib/nova/instances/5a1c83d6-db00-4f46-98d7-1b0c20b3bb82/disk"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:       <target dev="vda" bus="virtio"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     </disk>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <disk type="file" device="disk">
Jan 27 22:57:20 compute-0 nova_compute[185650]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:       <source file="/var/lib/nova/instances/5a1c83d6-db00-4f46-98d7-1b0c20b3bb82/disk.eph0"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:       <target dev="vdb" bus="virtio"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     </disk>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <disk type="file" device="cdrom">
Jan 27 22:57:20 compute-0 nova_compute[185650]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:       <source file="/var/lib/nova/instances/5a1c83d6-db00-4f46-98d7-1b0c20b3bb82/disk.config"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:       <target dev="sda" bus="sata"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     </disk>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <serial type="pty">
Jan 27 22:57:20 compute-0 nova_compute[185650]:       <log file="/var/lib/nova/instances/5a1c83d6-db00-4f46-98d7-1b0c20b3bb82/console.log" append="off"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     </serial>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <video>
Jan 27 22:57:20 compute-0 nova_compute[185650]:       <model type="virtio"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     </video>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <input type="tablet" bus="usb"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <rng model="virtio">
Jan 27 22:57:20 compute-0 nova_compute[185650]:       <backend model="random">/dev/urandom</backend>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     </rng>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <controller type="usb" index="0"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     <memballoon model="virtio">
Jan 27 22:57:20 compute-0 nova_compute[185650]:       <stats period="10"/>
Jan 27 22:57:20 compute-0 nova_compute[185650]:     </memballoon>
Jan 27 22:57:20 compute-0 nova_compute[185650]:   </devices>
Jan 27 22:57:20 compute-0 nova_compute[185650]: </domain>
Jan 27 22:57:20 compute-0 nova_compute[185650]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.591 185654 DEBUG nova.virt.libvirt.driver [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.591 185654 DEBUG nova.virt.libvirt.driver [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.591 185654 DEBUG nova.virt.libvirt.driver [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 22:57:20 compute-0 nova_compute[185650]: 2026-01-27 22:57:20.592 185654 INFO nova.virt.libvirt.driver [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Using config drive
Jan 27 22:57:21 compute-0 podman[244528]: 2026-01-27 22:57:21.403961183 +0000 UTC m=+0.090863168 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, version=9.4, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=kepler, architecture=x86_64, io.openshift.tags=base rhel9)
Jan 27 22:57:21 compute-0 podman[244529]: 2026-01-27 22:57:21.44513479 +0000 UTC m=+0.128173935 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 27 22:57:21 compute-0 nova_compute[185650]: 2026-01-27 22:57:21.610 185654 INFO nova.virt.libvirt.driver [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Creating config drive at /var/lib/nova/instances/5a1c83d6-db00-4f46-98d7-1b0c20b3bb82/disk.config
Jan 27 22:57:21 compute-0 nova_compute[185650]: 2026-01-27 22:57:21.619 185654 DEBUG oslo_concurrency.processutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5a1c83d6-db00-4f46-98d7-1b0c20b3bb82/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqo8w4ls_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:57:21 compute-0 nova_compute[185650]: 2026-01-27 22:57:21.744 185654 DEBUG oslo_concurrency.processutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5a1c83d6-db00-4f46-98d7-1b0c20b3bb82/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqo8w4ls_" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:57:21 compute-0 systemd-machined[157036]: New machine qemu-5-instance-00000005.
Jan 27 22:57:21 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Jan 27 22:57:22 compute-0 nova_compute[185650]: 2026-01-27 22:57:22.317 185654 DEBUG nova.virt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Emitting event <LifecycleEvent: 1769554642.3171384, 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 22:57:22 compute-0 nova_compute[185650]: 2026-01-27 22:57:22.318 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] VM Resumed (Lifecycle Event)
Jan 27 22:57:22 compute-0 nova_compute[185650]: 2026-01-27 22:57:22.322 185654 DEBUG nova.compute.manager [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 22:57:22 compute-0 nova_compute[185650]: 2026-01-27 22:57:22.323 185654 DEBUG nova.virt.libvirt.driver [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 22:57:22 compute-0 nova_compute[185650]: 2026-01-27 22:57:22.329 185654 INFO nova.virt.libvirt.driver [-] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Instance spawned successfully.
Jan 27 22:57:22 compute-0 nova_compute[185650]: 2026-01-27 22:57:22.330 185654 DEBUG nova.virt.libvirt.driver [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 22:57:22 compute-0 nova_compute[185650]: 2026-01-27 22:57:22.337 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 22:57:22 compute-0 nova_compute[185650]: 2026-01-27 22:57:22.348 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 22:57:22 compute-0 nova_compute[185650]: 2026-01-27 22:57:22.356 185654 DEBUG nova.virt.libvirt.driver [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 22:57:22 compute-0 nova_compute[185650]: 2026-01-27 22:57:22.356 185654 DEBUG nova.virt.libvirt.driver [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 22:57:22 compute-0 nova_compute[185650]: 2026-01-27 22:57:22.357 185654 DEBUG nova.virt.libvirt.driver [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 22:57:22 compute-0 nova_compute[185650]: 2026-01-27 22:57:22.357 185654 DEBUG nova.virt.libvirt.driver [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 22:57:22 compute-0 nova_compute[185650]: 2026-01-27 22:57:22.357 185654 DEBUG nova.virt.libvirt.driver [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 22:57:22 compute-0 nova_compute[185650]: 2026-01-27 22:57:22.358 185654 DEBUG nova.virt.libvirt.driver [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 22:57:22 compute-0 nova_compute[185650]: 2026-01-27 22:57:22.384 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 22:57:22 compute-0 nova_compute[185650]: 2026-01-27 22:57:22.384 185654 DEBUG nova.virt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Emitting event <LifecycleEvent: 1769554642.3218756, 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 22:57:22 compute-0 nova_compute[185650]: 2026-01-27 22:57:22.384 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] VM Started (Lifecycle Event)
Jan 27 22:57:22 compute-0 nova_compute[185650]: 2026-01-27 22:57:22.412 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 22:57:22 compute-0 nova_compute[185650]: 2026-01-27 22:57:22.420 185654 INFO nova.compute.manager [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Took 3.95 seconds to spawn the instance on the hypervisor.
Jan 27 22:57:22 compute-0 nova_compute[185650]: 2026-01-27 22:57:22.421 185654 DEBUG nova.compute.manager [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 22:57:22 compute-0 nova_compute[185650]: 2026-01-27 22:57:22.423 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 22:57:22 compute-0 nova_compute[185650]: 2026-01-27 22:57:22.449 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 22:57:22 compute-0 nova_compute[185650]: 2026-01-27 22:57:22.476 185654 INFO nova.compute.manager [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Took 4.41 seconds to build instance.
Jan 27 22:57:22 compute-0 nova_compute[185650]: 2026-01-27 22:57:22.490 185654 DEBUG oslo_concurrency.lockutils [None req-ba21c881-6413-461a-a16a-afe7727e2448 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "5a1c83d6-db00-4f46-98d7-1b0c20b3bb82" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.487s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:57:22 compute-0 nova_compute[185650]: 2026-01-27 22:57:22.953 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:57:23 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 27 22:57:23 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 27 22:57:25 compute-0 nova_compute[185650]: 2026-01-27 22:57:25.305 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:57:27 compute-0 nova_compute[185650]: 2026-01-27 22:57:27.954 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:57:29 compute-0 podman[244624]: 2026-01-27 22:57:29.391507357 +0000 UTC m=+0.081401085 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 22:57:29 compute-0 podman[201529]: time="2026-01-27T22:57:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:57:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:57:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 22:57:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:57:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4373 "" "Go-http-client/1.1"
Jan 27 22:57:30 compute-0 nova_compute[185650]: 2026-01-27 22:57:30.306 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:57:31 compute-0 podman[244648]: 2026-01-27 22:57:31.39798789 +0000 UTC m=+0.103244233 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, distribution-scope=public, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., version=9.6)
Jan 27 22:57:31 compute-0 openstack_network_exporter[204648]: ERROR   22:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:57:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:57:31 compute-0 openstack_network_exporter[204648]: ERROR   22:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:57:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:57:32 compute-0 nova_compute[185650]: 2026-01-27 22:57:32.955 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:57:35 compute-0 nova_compute[185650]: 2026-01-27 22:57:35.308 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:57:35 compute-0 nova_compute[185650]: 2026-01-27 22:57:35.775 185654 DEBUG oslo_concurrency.lockutils [None req-698b25cf-e7e0-4476-b26d-b8f8db787c5a 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "5a1c83d6-db00-4f46-98d7-1b0c20b3bb82" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:57:35 compute-0 nova_compute[185650]: 2026-01-27 22:57:35.776 185654 DEBUG oslo_concurrency.lockutils [None req-698b25cf-e7e0-4476-b26d-b8f8db787c5a 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "5a1c83d6-db00-4f46-98d7-1b0c20b3bb82" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:57:35 compute-0 nova_compute[185650]: 2026-01-27 22:57:35.777 185654 DEBUG oslo_concurrency.lockutils [None req-698b25cf-e7e0-4476-b26d-b8f8db787c5a 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "5a1c83d6-db00-4f46-98d7-1b0c20b3bb82-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:57:35 compute-0 nova_compute[185650]: 2026-01-27 22:57:35.777 185654 DEBUG oslo_concurrency.lockutils [None req-698b25cf-e7e0-4476-b26d-b8f8db787c5a 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "5a1c83d6-db00-4f46-98d7-1b0c20b3bb82-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:57:35 compute-0 nova_compute[185650]: 2026-01-27 22:57:35.778 185654 DEBUG oslo_concurrency.lockutils [None req-698b25cf-e7e0-4476-b26d-b8f8db787c5a 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "5a1c83d6-db00-4f46-98d7-1b0c20b3bb82-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:57:35 compute-0 nova_compute[185650]: 2026-01-27 22:57:35.780 185654 INFO nova.compute.manager [None req-698b25cf-e7e0-4476-b26d-b8f8db787c5a 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Terminating instance
Jan 27 22:57:35 compute-0 nova_compute[185650]: 2026-01-27 22:57:35.782 185654 DEBUG oslo_concurrency.lockutils [None req-698b25cf-e7e0-4476-b26d-b8f8db787c5a 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "refresh_cache-5a1c83d6-db00-4f46-98d7-1b0c20b3bb82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 22:57:35 compute-0 nova_compute[185650]: 2026-01-27 22:57:35.782 185654 DEBUG oslo_concurrency.lockutils [None req-698b25cf-e7e0-4476-b26d-b8f8db787c5a 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquired lock "refresh_cache-5a1c83d6-db00-4f46-98d7-1b0c20b3bb82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 22:57:35 compute-0 nova_compute[185650]: 2026-01-27 22:57:35.782 185654 DEBUG nova.network.neutron [None req-698b25cf-e7e0-4476-b26d-b8f8db787c5a 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 22:57:35 compute-0 nova_compute[185650]: 2026-01-27 22:57:35.922 185654 DEBUG nova.network.neutron [None req-698b25cf-e7e0-4476-b26d-b8f8db787c5a 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 22:57:36 compute-0 nova_compute[185650]: 2026-01-27 22:57:36.312 185654 DEBUG nova.network.neutron [None req-698b25cf-e7e0-4476-b26d-b8f8db787c5a 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 22:57:36 compute-0 nova_compute[185650]: 2026-01-27 22:57:36.331 185654 DEBUG oslo_concurrency.lockutils [None req-698b25cf-e7e0-4476-b26d-b8f8db787c5a 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Releasing lock "refresh_cache-5a1c83d6-db00-4f46-98d7-1b0c20b3bb82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 22:57:36 compute-0 nova_compute[185650]: 2026-01-27 22:57:36.331 185654 DEBUG nova.compute.manager [None req-698b25cf-e7e0-4476-b26d-b8f8db787c5a 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 22:57:36 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Jan 27 22:57:36 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 14.821s CPU time.
Jan 27 22:57:36 compute-0 systemd-machined[157036]: Machine qemu-5-instance-00000005 terminated.
Jan 27 22:57:36 compute-0 nova_compute[185650]: 2026-01-27 22:57:36.599 185654 INFO nova.virt.libvirt.driver [-] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Instance destroyed successfully.
Jan 27 22:57:36 compute-0 nova_compute[185650]: 2026-01-27 22:57:36.600 185654 DEBUG nova.objects.instance [None req-698b25cf-e7e0-4476-b26d-b8f8db787c5a 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lazy-loading 'resources' on Instance uuid 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 22:57:36 compute-0 nova_compute[185650]: 2026-01-27 22:57:36.617 185654 INFO nova.virt.libvirt.driver [None req-698b25cf-e7e0-4476-b26d-b8f8db787c5a 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Deleting instance files /var/lib/nova/instances/5a1c83d6-db00-4f46-98d7-1b0c20b3bb82_del
Jan 27 22:57:36 compute-0 nova_compute[185650]: 2026-01-27 22:57:36.618 185654 INFO nova.virt.libvirt.driver [None req-698b25cf-e7e0-4476-b26d-b8f8db787c5a 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Deletion of /var/lib/nova/instances/5a1c83d6-db00-4f46-98d7-1b0c20b3bb82_del complete
Jan 27 22:57:36 compute-0 nova_compute[185650]: 2026-01-27 22:57:36.678 185654 INFO nova.compute.manager [None req-698b25cf-e7e0-4476-b26d-b8f8db787c5a 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Took 0.35 seconds to destroy the instance on the hypervisor.
Jan 27 22:57:36 compute-0 nova_compute[185650]: 2026-01-27 22:57:36.678 185654 DEBUG oslo.service.loopingcall [None req-698b25cf-e7e0-4476-b26d-b8f8db787c5a 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 22:57:36 compute-0 nova_compute[185650]: 2026-01-27 22:57:36.679 185654 DEBUG nova.compute.manager [-] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 22:57:36 compute-0 nova_compute[185650]: 2026-01-27 22:57:36.679 185654 DEBUG nova.network.neutron [-] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 22:57:37 compute-0 nova_compute[185650]: 2026-01-27 22:57:37.630 185654 DEBUG nova.network.neutron [-] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 22:57:37 compute-0 nova_compute[185650]: 2026-01-27 22:57:37.646 185654 DEBUG nova.network.neutron [-] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 22:57:37 compute-0 nova_compute[185650]: 2026-01-27 22:57:37.660 185654 INFO nova.compute.manager [-] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Took 0.98 seconds to deallocate network for instance.
Jan 27 22:57:37 compute-0 nova_compute[185650]: 2026-01-27 22:57:37.696 185654 DEBUG oslo_concurrency.lockutils [None req-698b25cf-e7e0-4476-b26d-b8f8db787c5a 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:57:37 compute-0 nova_compute[185650]: 2026-01-27 22:57:37.696 185654 DEBUG oslo_concurrency.lockutils [None req-698b25cf-e7e0-4476-b26d-b8f8db787c5a 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:57:37 compute-0 nova_compute[185650]: 2026-01-27 22:57:37.791 185654 DEBUG nova.compute.provider_tree [None req-698b25cf-e7e0-4476-b26d-b8f8db787c5a 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 22:57:37 compute-0 nova_compute[185650]: 2026-01-27 22:57:37.807 185654 DEBUG nova.scheduler.client.report [None req-698b25cf-e7e0-4476-b26d-b8f8db787c5a 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 22:57:37 compute-0 nova_compute[185650]: 2026-01-27 22:57:37.829 185654 DEBUG oslo_concurrency.lockutils [None req-698b25cf-e7e0-4476-b26d-b8f8db787c5a 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:57:37 compute-0 nova_compute[185650]: 2026-01-27 22:57:37.861 185654 INFO nova.scheduler.client.report [None req-698b25cf-e7e0-4476-b26d-b8f8db787c5a 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Deleted allocations for instance 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82
Jan 27 22:57:37 compute-0 nova_compute[185650]: 2026-01-27 22:57:37.912 185654 DEBUG oslo_concurrency.lockutils [None req-698b25cf-e7e0-4476-b26d-b8f8db787c5a 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "5a1c83d6-db00-4f46-98d7-1b0c20b3bb82" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.136s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:57:37 compute-0 nova_compute[185650]: 2026-01-27 22:57:37.957 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.107 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.108 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.108 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c646060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f826c6475f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.109 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645490>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.116 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.117 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.117 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5409358c-78dc-4761-841a-7f453c6209fb', 'name': 'vn-bxiivp3-je4u2ztq4ixb-joz7rt6vemeh-vnf-jpr5uezxduem', 'flavor': {'id': 'c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7e803ca7-2382-4e5a-95f7-55acaa154415'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8318d5a200d74e4386cf4972db015b75', 'user_id': '7387204f74504e288ed7a5dee73f5083', 'hostId': '6b704d868c202dfce1245c3ae64d5f83176b88963479398e3b586eea', 'status': 'active', 'metadata': {'metering.server_group': '3b67098f-eb50-41e2-8c8a-348367561673'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.117 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.118 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6477a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.120 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '344c74c3-95d6-4f19-993f-b4a89c9d074b', 'name': 'test_0', 'flavor': {'id': 'c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7e803ca7-2382-4e5a-95f7-55acaa154415'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8318d5a200d74e4386cf4972db015b75', 'user_id': '7387204f74504e288ed7a5dee73f5083', 'hostId': '6b704d868c202dfce1245c3ae64d5f83176b88963479398e3b586eea', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.120 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.120 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c646060>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.120 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c646060>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.121 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.121 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T22:57:38.120955) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.125 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.130 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.131 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.132 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f826c645dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.132 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.132 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f826c647800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.132 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.132 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.132 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.132 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.133 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.133 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.133 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.134 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f826c647650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.134 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.134 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.134 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.134 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.134 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.135 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.135 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.135 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f826c645640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.135 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.136 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.136 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.136 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T22:57:38.132922) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.136 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.136 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T22:57:38.134558) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.137 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T22:57:38.136571) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.218 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.latency volume: 2048805649 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.218 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.latency volume: 9512100 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.218 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.296 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.latency volume: 1982773015 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.297 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.latency volume: 11972381 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.297 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.297 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f826c8ae7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.298 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.298 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.298 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.298 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.298 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.299 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.bytes volume: 2214 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.299 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T22:57:38.298517) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.299 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f826c645a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.299 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.300 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.300 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.300 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.300 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T22:57:38.300199) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.300 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.300 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.301 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.301 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.301 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.301 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.302 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f826c6462a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.302 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.302 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.302 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.302 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.303 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.303 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T22:57:38.302924) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.303 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.303 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f826c647f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.304 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.304 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.304 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.304 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.304 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T22:57:38.304440) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.333 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/cpu volume: 38670000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.360 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/cpu volume: 42370000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.361 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.361 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f826c645af0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.361 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.361 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.361 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.361 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.362 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.362 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f826c645d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.362 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.362 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.362 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.362 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.362 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.362 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/memory.usage volume: 48.734375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.363 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.363 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f826c645b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.363 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.363 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.363 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.363 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T22:57:38.361586) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.363 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.363 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T22:57:38.362510) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.364 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.364 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f826c644a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.364 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.364 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T22:57:38.363785) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.364 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.364 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.364 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.365 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T22:57:38.364680) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.390 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.390 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.390 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.415 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.416 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.416 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.417 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.417 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f826c6453a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.417 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.417 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645490>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.417 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645490>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.417 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.417 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.418 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.418 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T22:57:38.417614) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.418 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.418 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.418 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.418 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.419 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.419 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f826c6454c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.419 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.419 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.419 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.419 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.419 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.latency volume: 669467296 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.420 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.latency volume: 92088857 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.420 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.latency volume: 79077409 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.420 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T22:57:38.419761) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.420 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.latency volume: 603707572 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.420 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.latency volume: 113814738 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.421 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.latency volume: 101138361 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.421 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.421 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f826c645520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.421 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.421 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645550>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.422 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645550>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.422 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.422 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T22:57:38.422168) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.422 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.422 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.423 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.423 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.423 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.423 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.424 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.424 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f826c645d90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.424 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.424 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.424 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.424 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.424 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.424 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.425 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.425 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f826c646570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.425 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.425 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.425 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.425 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.425 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.425 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T22:57:38.424499) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.426 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.426 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.426 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f826c645580>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.426 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T22:57:38.425783) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.426 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.426 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.426 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.426 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.427 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.427 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.427 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.427 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.427 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.428 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.428 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T22:57:38.426938) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.428 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.428 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f826c6455e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.428 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.428 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645610>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.428 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645610>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.428 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.429 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.429 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.429 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.429 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.429 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.430 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.430 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.430 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f826c644050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.430 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.430 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.430 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.430 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.431 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.431 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.431 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.431 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f826c647860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.431 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.431 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T22:57:38.428932) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.431 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.431 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T22:57:38.430919) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.431 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.432 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.432 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.432 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.432 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.432 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f826c6476e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.432 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.433 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.433 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.433 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T22:57:38.432008) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.433 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.433 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.433 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.433 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.434 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f826c6456a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.434 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T22:57:38.433165) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.434 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.434 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645730>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.434 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645730>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.434 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.434 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.434 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.435 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.435 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f826f277b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.435 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.435 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.435 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.435 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.435 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.435 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T22:57:38.434353) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.435 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T22:57:38.435491) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.435 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.436 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.436 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.436 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.436 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.437 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.437 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f826c647770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.437 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.437 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.437 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.437 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.437 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.438 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.438 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.438 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.438 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.438 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.438 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.438 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.438 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.438 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.438 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.438 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.438 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.438 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.439 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.439 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.439 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.439 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.439 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.439 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.439 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.439 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:57:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:57:38.439 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:57:40 compute-0 nova_compute[185650]: 2026-01-27 22:57:40.311 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:57:42 compute-0 nova_compute[185650]: 2026-01-27 22:57:42.960 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:57:43 compute-0 podman[244685]: 2026-01-27 22:57:43.379751648 +0000 UTC m=+0.068373981 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Jan 27 22:57:43 compute-0 podman[244686]: 2026-01-27 22:57:43.392653148 +0000 UTC m=+0.082920913 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40)
Jan 27 22:57:45 compute-0 nova_compute[185650]: 2026-01-27 22:57:45.313 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:57:47 compute-0 podman[244721]: 2026-01-27 22:57:47.356379934 +0000 UTC m=+0.059580238 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 22:57:47 compute-0 nova_compute[185650]: 2026-01-27 22:57:47.963 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:57:49 compute-0 podman[244745]: 2026-01-27 22:57:49.405155898 +0000 UTC m=+0.090294211 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 27 22:57:50 compute-0 nova_compute[185650]: 2026-01-27 22:57:50.316 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:57:51 compute-0 nova_compute[185650]: 2026-01-27 22:57:51.597 185654 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769554656.594275, 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 22:57:51 compute-0 nova_compute[185650]: 2026-01-27 22:57:51.598 185654 INFO nova.compute.manager [-] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] VM Stopped (Lifecycle Event)
Jan 27 22:57:51 compute-0 nova_compute[185650]: 2026-01-27 22:57:51.681 185654 DEBUG nova.compute.manager [None req-4018ff88-4ca0-4cf0-9f1f-5bf465c4c08d - - - - - -] [instance: 5a1c83d6-db00-4f46-98d7-1b0c20b3bb82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 22:57:52 compute-0 podman[244765]: 2026-01-27 22:57:52.427427959 +0000 UTC m=+0.104983968 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, name=ubi9, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, release=1214.1726694543, distribution-scope=public, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, version=9.4, release-0.7.12=, vcs-type=git, io.buildah.version=1.29.0, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 27 22:57:52 compute-0 podman[244766]: 2026-01-27 22:57:52.445909101 +0000 UTC m=+0.130973513 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 22:57:52 compute-0 nova_compute[185650]: 2026-01-27 22:57:52.965 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:57:55 compute-0 nova_compute[185650]: 2026-01-27 22:57:55.319 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:57:57 compute-0 nova_compute[185650]: 2026-01-27 22:57:57.967 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:57:57 compute-0 nova_compute[185650]: 2026-01-27 22:57:57.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:57:57 compute-0 nova_compute[185650]: 2026-01-27 22:57:57.993 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 22:57:59 compute-0 podman[201529]: time="2026-01-27T22:57:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:57:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:57:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 22:57:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4380 "" "Go-http-client/1.1"
Jan 27 22:57:59 compute-0 nova_compute[185650]: 2026-01-27 22:57:59.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:57:59 compute-0 nova_compute[185650]: 2026-01-27 22:57:59.993 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 22:57:59 compute-0 nova_compute[185650]: 2026-01-27 22:57:59.994 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 22:58:00 compute-0 nova_compute[185650]: 2026-01-27 22:58:00.321 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:58:00 compute-0 sshd-session[244150]: Received disconnect from 38.102.83.151 port 33938:11: disconnected by user
Jan 27 22:58:00 compute-0 podman[244808]: 2026-01-27 22:58:00.387962854 +0000 UTC m=+0.087829314 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 22:58:00 compute-0 sshd-session[244150]: Disconnected from user zuul 38.102.83.151 port 33938
Jan 27 22:58:00 compute-0 sshd-session[244141]: pam_unix(sshd:session): session closed for user zuul
Jan 27 22:58:00 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Jan 27 22:58:00 compute-0 systemd[1]: session-29.scope: Consumed 1.017s CPU time.
Jan 27 22:58:00 compute-0 systemd-logind[789]: Session 29 logged out. Waiting for processes to exit.
Jan 27 22:58:00 compute-0 systemd-logind[789]: Removed session 29.
Jan 27 22:58:00 compute-0 nova_compute[185650]: 2026-01-27 22:58:00.629 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "refresh_cache-344c74c3-95d6-4f19-993f-b4a89c9d074b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 22:58:00 compute-0 nova_compute[185650]: 2026-01-27 22:58:00.630 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquired lock "refresh_cache-344c74c3-95d6-4f19-993f-b4a89c9d074b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 22:58:00 compute-0 nova_compute[185650]: 2026-01-27 22:58:00.631 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 22:58:00 compute-0 nova_compute[185650]: 2026-01-27 22:58:00.631 185654 DEBUG nova.objects.instance [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 344c74c3-95d6-4f19-993f-b4a89c9d074b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 22:58:01 compute-0 openstack_network_exporter[204648]: ERROR   22:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:58:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:58:01 compute-0 openstack_network_exporter[204648]: ERROR   22:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:58:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:58:02 compute-0 nova_compute[185650]: 2026-01-27 22:58:02.107 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Updating instance_info_cache with network_info: [{"id": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "address": "fa:16:3e:27:72:fe", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.119", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap389fa2e1-24", "ovs_interfaceid": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 22:58:02 compute-0 nova_compute[185650]: 2026-01-27 22:58:02.120 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Releasing lock "refresh_cache-344c74c3-95d6-4f19-993f-b4a89c9d074b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 22:58:02 compute-0 nova_compute[185650]: 2026-01-27 22:58:02.120 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 22:58:02 compute-0 nova_compute[185650]: 2026-01-27 22:58:02.121 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:58:02 compute-0 nova_compute[185650]: 2026-01-27 22:58:02.121 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:58:02 compute-0 nova_compute[185650]: 2026-01-27 22:58:02.121 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:58:02 compute-0 nova_compute[185650]: 2026-01-27 22:58:02.121 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:58:02 compute-0 nova_compute[185650]: 2026-01-27 22:58:02.121 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:58:02 compute-0 nova_compute[185650]: 2026-01-27 22:58:02.144 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:58:02 compute-0 nova_compute[185650]: 2026-01-27 22:58:02.144 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:58:02 compute-0 nova_compute[185650]: 2026-01-27 22:58:02.144 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:58:02 compute-0 nova_compute[185650]: 2026-01-27 22:58:02.144 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 22:58:02 compute-0 nova_compute[185650]: 2026-01-27 22:58:02.224 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:58:02 compute-0 podman[244833]: 2026-01-27 22:58:02.27431828 +0000 UTC m=+0.081155947 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, version=9.6, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.openshift.expose-services=)
Jan 27 22:58:02 compute-0 nova_compute[185650]: 2026-01-27 22:58:02.284 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:58:02 compute-0 nova_compute[185650]: 2026-01-27 22:58:02.285 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:58:02 compute-0 nova_compute[185650]: 2026-01-27 22:58:02.363 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:58:02 compute-0 nova_compute[185650]: 2026-01-27 22:58:02.364 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:58:02 compute-0 nova_compute[185650]: 2026-01-27 22:58:02.453 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:58:02 compute-0 nova_compute[185650]: 2026-01-27 22:58:02.455 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:58:02 compute-0 nova_compute[185650]: 2026-01-27 22:58:02.521 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:58:02 compute-0 nova_compute[185650]: 2026-01-27 22:58:02.530 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:58:02 compute-0 nova_compute[185650]: 2026-01-27 22:58:02.588 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:58:02 compute-0 nova_compute[185650]: 2026-01-27 22:58:02.589 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:58:02 compute-0 nova_compute[185650]: 2026-01-27 22:58:02.648 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:58:02 compute-0 nova_compute[185650]: 2026-01-27 22:58:02.650 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:58:02 compute-0 nova_compute[185650]: 2026-01-27 22:58:02.723 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:58:02 compute-0 nova_compute[185650]: 2026-01-27 22:58:02.725 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:58:02 compute-0 nova_compute[185650]: 2026-01-27 22:58:02.799 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:58:02 compute-0 nova_compute[185650]: 2026-01-27 22:58:02.970 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:58:03 compute-0 nova_compute[185650]: 2026-01-27 22:58:03.138 185654 WARNING nova.virt.libvirt.driver [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 22:58:03 compute-0 nova_compute[185650]: 2026-01-27 22:58:03.139 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4840MB free_disk=72.37123489379883GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 22:58:03 compute-0 nova_compute[185650]: 2026-01-27 22:58:03.140 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:58:03 compute-0 nova_compute[185650]: 2026-01-27 22:58:03.140 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:58:03 compute-0 nova_compute[185650]: 2026-01-27 22:58:03.206 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 344c74c3-95d6-4f19-993f-b4a89c9d074b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:58:03 compute-0 nova_compute[185650]: 2026-01-27 22:58:03.207 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 5409358c-78dc-4761-841a-7f453c6209fb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:58:03 compute-0 nova_compute[185650]: 2026-01-27 22:58:03.207 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 22:58:03 compute-0 nova_compute[185650]: 2026-01-27 22:58:03.208 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 22:58:03 compute-0 nova_compute[185650]: 2026-01-27 22:58:03.256 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 22:58:03 compute-0 nova_compute[185650]: 2026-01-27 22:58:03.268 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 22:58:03 compute-0 nova_compute[185650]: 2026-01-27 22:58:03.288 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 22:58:03 compute-0 nova_compute[185650]: 2026-01-27 22:58:03.289 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:58:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:58:04.148 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:58:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:58:04.148 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:58:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:58:04.149 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:58:05 compute-0 nova_compute[185650]: 2026-01-27 22:58:05.161 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:58:05 compute-0 nova_compute[185650]: 2026-01-27 22:58:05.324 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:58:05 compute-0 nova_compute[185650]: 2026-01-27 22:58:05.988 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:58:07 compute-0 nova_compute[185650]: 2026-01-27 22:58:07.975 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:58:10 compute-0 nova_compute[185650]: 2026-01-27 22:58:10.326 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:58:12 compute-0 sshd-session[244877]: Accepted publickey for zuul from 38.102.83.151 port 50012 ssh2: RSA SHA256:ZuKoWm/C8Whnhgf9tPVFWdXLNeFqjD7XfMzDvbUlFFI
Jan 27 22:58:12 compute-0 systemd-logind[789]: New session 30 of user zuul.
Jan 27 22:58:12 compute-0 systemd[1]: Started Session 30 of User zuul.
Jan 27 22:58:12 compute-0 sshd-session[244877]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 22:58:12 compute-0 nova_compute[185650]: 2026-01-27 22:58:12.979 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:58:13 compute-0 sudo[245054]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxgxdiatvbqvrrjtuoxommazakvxfmbr ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769554692.7377691-59498-235770801532167/AnsiballZ_command.py'
Jan 27 22:58:13 compute-0 sudo[245054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:58:13 compute-0 podman[245056]: 2026-01-27 22:58:13.508532487 +0000 UTC m=+0.067031552 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 27 22:58:13 compute-0 podman[245057]: 2026-01-27 22:58:13.519055855 +0000 UTC m=+0.076808760 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 22:58:13 compute-0 python3[245058]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:58:13 compute-0 sudo[245054]: pam_unix(sudo:session): session closed for user root
Jan 27 22:58:15 compute-0 nova_compute[185650]: 2026-01-27 22:58:15.328 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:58:17 compute-0 nova_compute[185650]: 2026-01-27 22:58:17.982 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:58:18 compute-0 podman[245130]: 2026-01-27 22:58:18.39060204 +0000 UTC m=+0.076515563 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 22:58:20 compute-0 nova_compute[185650]: 2026-01-27 22:58:20.331 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:58:20 compute-0 podman[245154]: 2026-01-27 22:58:20.411287027 +0000 UTC m=+0.106476754 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Jan 27 22:58:21 compute-0 sudo[245345]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axdyudqlnvmjsqkvyltgkqavngfdejfs ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769554700.461784-59666-119645020953635/AnsiballZ_command.py'
Jan 27 22:58:21 compute-0 sudo[245345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:58:21 compute-0 python3[245347]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:58:21 compute-0 sudo[245345]: pam_unix(sudo:session): session closed for user root
Jan 27 22:58:22 compute-0 nova_compute[185650]: 2026-01-27 22:58:22.985 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:58:23 compute-0 podman[245387]: 2026-01-27 22:58:23.372143759 +0000 UTC m=+0.071738566 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, release=1214.1726694543, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., name=ubi9, architecture=x86_64, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., release-0.7.12=)
Jan 27 22:58:23 compute-0 podman[245388]: 2026-01-27 22:58:23.399906163 +0000 UTC m=+0.094111128 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 22:58:25 compute-0 nova_compute[185650]: 2026-01-27 22:58:25.333 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:58:27 compute-0 nova_compute[185650]: 2026-01-27 22:58:27.989 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:58:29 compute-0 podman[201529]: time="2026-01-27T22:58:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:58:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:58:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 22:58:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4373 "" "Go-http-client/1.1"
Jan 27 22:58:30 compute-0 nova_compute[185650]: 2026-01-27 22:58:30.335 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:58:30 compute-0 sudo[245616]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-objsxojvfgabcrsbsixdvuzvgvnaabmn ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769554710.010482-59824-16829129947192/AnsiballZ_command.py'
Jan 27 22:58:30 compute-0 sudo[245616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:58:30 compute-0 podman[245578]: 2026-01-27 22:58:30.65009183 +0000 UTC m=+0.095488774 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 22:58:30 compute-0 python3[245629]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:58:30 compute-0 sudo[245616]: pam_unix(sudo:session): session closed for user root
Jan 27 22:58:31 compute-0 openstack_network_exporter[204648]: ERROR   22:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:58:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:58:31 compute-0 openstack_network_exporter[204648]: ERROR   22:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:58:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:58:32 compute-0 nova_compute[185650]: 2026-01-27 22:58:32.993 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:58:33 compute-0 podman[245668]: 2026-01-27 22:58:33.355513104 +0000 UTC m=+0.064221727 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.expose-services=, io.buildah.version=1.33.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-type=git, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 
'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 22:58:35 compute-0 nova_compute[185650]: 2026-01-27 22:58:35.337 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:58:37 compute-0 nova_compute[185650]: 2026-01-27 22:58:37.995 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:58:40 compute-0 nova_compute[185650]: 2026-01-27 22:58:40.341 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:58:42 compute-0 nova_compute[185650]: 2026-01-27 22:58:42.998 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:58:44 compute-0 podman[245689]: 2026-01-27 22:58:44.386870293 +0000 UTC m=+0.079430749 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 27 22:58:44 compute-0 podman[245690]: 2026-01-27 22:58:44.435265691 +0000 UTC m=+0.113658053 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 27 22:58:45 compute-0 sudo[245898]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywrldyjyzdelnrpqtdhdwmdbmihjyhpx ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769554724.5739727-60046-134456879636227/AnsiballZ_command.py'
Jan 27 22:58:45 compute-0 sudo[245898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 22:58:45 compute-0 nova_compute[185650]: 2026-01-27 22:58:45.343 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:58:45 compute-0 python3[245900]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 27 22:58:45 compute-0 sudo[245898]: pam_unix(sudo:session): session closed for user root
Jan 27 22:58:48 compute-0 nova_compute[185650]: 2026-01-27 22:58:48.002 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:58:49 compute-0 podman[245938]: 2026-01-27 22:58:49.361359091 +0000 UTC m=+0.060947781 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 22:58:50 compute-0 nova_compute[185650]: 2026-01-27 22:58:50.345 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:58:51 compute-0 podman[245963]: 2026-01-27 22:58:51.410642354 +0000 UTC m=+0.111689041 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0)
Jan 27 22:58:53 compute-0 nova_compute[185650]: 2026-01-27 22:58:53.004 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:58:54 compute-0 podman[245982]: 2026-01-27 22:58:54.394582514 +0000 UTC m=+0.095160574 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, distribution-scope=public, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1214.1726694543, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, name=ubi9, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0)
Jan 27 22:58:54 compute-0 podman[245983]: 2026-01-27 22:58:54.407747773 +0000 UTC m=+0.107265886 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller, container_name=ovn_controller)
Jan 27 22:58:55 compute-0 nova_compute[185650]: 2026-01-27 22:58:55.349 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:58:57 compute-0 nova_compute[185650]: 2026-01-27 22:58:57.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:58:57 compute-0 nova_compute[185650]: 2026-01-27 22:58:57.994 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 22:58:58 compute-0 nova_compute[185650]: 2026-01-27 22:58:58.006 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:58:59 compute-0 podman[201529]: time="2026-01-27T22:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:58:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 22:58:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4375 "" "Go-http-client/1.1"
Jan 27 22:58:59 compute-0 nova_compute[185650]: 2026-01-27 22:58:59.995 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:59:00 compute-0 nova_compute[185650]: 2026-01-27 22:59:00.352 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:59:01 compute-0 podman[246026]: 2026-01-27 22:59:01.387847392 +0000 UTC m=+0.089092075 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 22:59:01 compute-0 openstack_network_exporter[204648]: ERROR   22:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:59:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:59:01 compute-0 openstack_network_exporter[204648]: ERROR   22:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:59:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:59:01 compute-0 nova_compute[185650]: 2026-01-27 22:59:01.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:59:01 compute-0 nova_compute[185650]: 2026-01-27 22:59:01.994 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 22:59:02 compute-0 nova_compute[185650]: 2026-01-27 22:59:02.639 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "refresh_cache-5409358c-78dc-4761-841a-7f453c6209fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 22:59:02 compute-0 nova_compute[185650]: 2026-01-27 22:59:02.641 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquired lock "refresh_cache-5409358c-78dc-4761-841a-7f453c6209fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 22:59:02 compute-0 nova_compute[185650]: 2026-01-27 22:59:02.642 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 22:59:03 compute-0 nova_compute[185650]: 2026-01-27 22:59:03.008 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:59:04 compute-0 nova_compute[185650]: 2026-01-27 22:59:04.028 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Updating instance_info_cache with network_info: [{"id": "ccfe58e9-3ff7-4073-9f9f-c8e641661ba0", "address": "fa:16:3e:17:dc:a3", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccfe58e9-3f", "ovs_interfaceid": "ccfe58e9-3ff7-4073-9f9f-c8e641661ba0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 22:59:04 compute-0 nova_compute[185650]: 2026-01-27 22:59:04.042 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Releasing lock "refresh_cache-5409358c-78dc-4761-841a-7f453c6209fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 22:59:04 compute-0 nova_compute[185650]: 2026-01-27 22:59:04.043 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 22:59:04 compute-0 nova_compute[185650]: 2026-01-27 22:59:04.043 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:59:04 compute-0 nova_compute[185650]: 2026-01-27 22:59:04.044 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:59:04 compute-0 nova_compute[185650]: 2026-01-27 22:59:04.044 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:59:04 compute-0 nova_compute[185650]: 2026-01-27 22:59:04.044 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:59:04 compute-0 nova_compute[185650]: 2026-01-27 22:59:04.045 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:59:04 compute-0 nova_compute[185650]: 2026-01-27 22:59:04.068 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:59:04 compute-0 nova_compute[185650]: 2026-01-27 22:59:04.068 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:59:04 compute-0 nova_compute[185650]: 2026-01-27 22:59:04.069 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:59:04 compute-0 nova_compute[185650]: 2026-01-27 22:59:04.069 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 22:59:04 compute-0 nova_compute[185650]: 2026-01-27 22:59:04.148 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:59:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:59:04.148 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:59:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:59:04.150 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:59:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 22:59:04.150 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:59:04 compute-0 nova_compute[185650]: 2026-01-27 22:59:04.227 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:59:04 compute-0 nova_compute[185650]: 2026-01-27 22:59:04.229 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:59:04 compute-0 nova_compute[185650]: 2026-01-27 22:59:04.296 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:59:04 compute-0 nova_compute[185650]: 2026-01-27 22:59:04.297 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:59:04 compute-0 nova_compute[185650]: 2026-01-27 22:59:04.357 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:59:04 compute-0 nova_compute[185650]: 2026-01-27 22:59:04.357 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:59:04 compute-0 podman[246053]: 2026-01-27 22:59:04.374087205 +0000 UTC m=+0.073253065 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, name=ubi9-minimal, version=9.6, architecture=x86_64, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2025-08-20T13:12:41)
Jan 27 22:59:04 compute-0 nova_compute[185650]: 2026-01-27 22:59:04.416 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:59:04 compute-0 nova_compute[185650]: 2026-01-27 22:59:04.422 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:59:04 compute-0 nova_compute[185650]: 2026-01-27 22:59:04.480 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:59:04 compute-0 nova_compute[185650]: 2026-01-27 22:59:04.481 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:59:04 compute-0 nova_compute[185650]: 2026-01-27 22:59:04.574 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:59:04 compute-0 nova_compute[185650]: 2026-01-27 22:59:04.575 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:59:04 compute-0 nova_compute[185650]: 2026-01-27 22:59:04.631 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:59:04 compute-0 nova_compute[185650]: 2026-01-27 22:59:04.632 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 22:59:04 compute-0 nova_compute[185650]: 2026-01-27 22:59:04.697 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 22:59:05 compute-0 nova_compute[185650]: 2026-01-27 22:59:05.023 185654 WARNING nova.virt.libvirt.driver [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 22:59:05 compute-0 nova_compute[185650]: 2026-01-27 22:59:05.024 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4844MB free_disk=72.37072372436523GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 22:59:05 compute-0 nova_compute[185650]: 2026-01-27 22:59:05.025 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 22:59:05 compute-0 nova_compute[185650]: 2026-01-27 22:59:05.025 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 22:59:05 compute-0 nova_compute[185650]: 2026-01-27 22:59:05.091 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 344c74c3-95d6-4f19-993f-b4a89c9d074b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:59:05 compute-0 nova_compute[185650]: 2026-01-27 22:59:05.091 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 5409358c-78dc-4761-841a-7f453c6209fb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 22:59:05 compute-0 nova_compute[185650]: 2026-01-27 22:59:05.092 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 22:59:05 compute-0 nova_compute[185650]: 2026-01-27 22:59:05.092 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 22:59:05 compute-0 nova_compute[185650]: 2026-01-27 22:59:05.106 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Refreshing inventories for resource provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 22:59:05 compute-0 nova_compute[185650]: 2026-01-27 22:59:05.119 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Updating ProviderTree inventory for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 22:59:05 compute-0 nova_compute[185650]: 2026-01-27 22:59:05.120 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Updating inventory in ProviderTree for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 22:59:05 compute-0 nova_compute[185650]: 2026-01-27 22:59:05.134 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Refreshing aggregate associations for resource provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 22:59:05 compute-0 nova_compute[185650]: 2026-01-27 22:59:05.155 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Refreshing trait associations for resource provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NODE,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_AVX,HW_CPU_X86_MMX,HW_CPU_X86_BMI,HW_CPU_X86_SSE41,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 22:59:05 compute-0 nova_compute[185650]: 2026-01-27 22:59:05.211 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 22:59:05 compute-0 nova_compute[185650]: 2026-01-27 22:59:05.227 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 22:59:05 compute-0 nova_compute[185650]: 2026-01-27 22:59:05.228 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 22:59:05 compute-0 nova_compute[185650]: 2026-01-27 22:59:05.229 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 22:59:05 compute-0 nova_compute[185650]: 2026-01-27 22:59:05.354 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:59:08 compute-0 nova_compute[185650]: 2026-01-27 22:59:08.011 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:59:10 compute-0 nova_compute[185650]: 2026-01-27 22:59:10.224 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:59:10 compute-0 nova_compute[185650]: 2026-01-27 22:59:10.225 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:59:10 compute-0 nova_compute[185650]: 2026-01-27 22:59:10.355 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:59:13 compute-0 nova_compute[185650]: 2026-01-27 22:59:13.013 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:59:14 compute-0 podman[246094]: 2026-01-27 22:59:14.750808563 +0000 UTC m=+0.069461986 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 27 22:59:14 compute-0 podman[246095]: 2026-01-27 22:59:14.756955486 +0000 UTC m=+0.073929984 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260126, 
org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Jan 27 22:59:15 compute-0 nova_compute[185650]: 2026-01-27 22:59:15.358 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:59:18 compute-0 nova_compute[185650]: 2026-01-27 22:59:18.016 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:59:20 compute-0 nova_compute[185650]: 2026-01-27 22:59:20.360 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:59:20 compute-0 podman[246132]: 2026-01-27 22:59:20.416166522 +0000 UTC m=+0.102573643 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 22:59:22 compute-0 podman[246157]: 2026-01-27 22:59:22.378018345 +0000 UTC m=+0.073233161 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 22:59:23 compute-0 nova_compute[185650]: 2026-01-27 22:59:23.018 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:59:23 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 27 22:59:25 compute-0 nova_compute[185650]: 2026-01-27 22:59:25.364 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:59:25 compute-0 podman[246177]: 2026-01-27 22:59:25.407328074 +0000 UTC m=+0.089287908 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_id=kepler, release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, container_name=kepler, io.openshift.tags=base rhel9, managed_by=edpm_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, maintainer=Red Hat, Inc.)
Jan 27 22:59:25 compute-0 podman[246178]: 2026-01-27 22:59:25.425966588 +0000 UTC m=+0.109885722 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 27 22:59:28 compute-0 nova_compute[185650]: 2026-01-27 22:59:28.022 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:59:29 compute-0 podman[201529]: time="2026-01-27T22:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:59:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 22:59:29 compute-0 podman[201529]: @ - - [27/Jan/2026:22:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4372 "" "Go-http-client/1.1"
Jan 27 22:59:30 compute-0 nova_compute[185650]: 2026-01-27 22:59:30.366 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:59:31 compute-0 openstack_network_exporter[204648]: ERROR   22:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 22:59:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:59:31 compute-0 openstack_network_exporter[204648]: ERROR   22:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 22:59:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 22:59:32 compute-0 podman[246222]: 2026-01-27 22:59:32.415458174 +0000 UTC m=+0.105114259 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 22:59:33 compute-0 nova_compute[185650]: 2026-01-27 22:59:33.024 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:59:35 compute-0 nova_compute[185650]: 2026-01-27 22:59:35.368 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:59:35 compute-0 podman[246245]: 2026-01-27 22:59:35.425094895 +0000 UTC m=+0.104891453 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped 
down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, release=1755695350, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Jan 27 22:59:38 compute-0 nova_compute[185650]: 2026-01-27 22:59:38.027 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.108 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.108 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.109 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c646060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f826c6475f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645490>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.115 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5409358c-78dc-4761-841a-7f453c6209fb', 'name': 'vn-bxiivp3-je4u2ztq4ixb-joz7rt6vemeh-vnf-jpr5uezxduem', 'flavor': {'id': 'c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7e803ca7-2382-4e5a-95f7-55acaa154415'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8318d5a200d74e4386cf4972db015b75', 'user_id': '7387204f74504e288ed7a5dee73f5083', 'hostId': '6b704d868c202dfce1245c3ae64d5f83176b88963479398e3b586eea', 'status': 'active', 'metadata': {'metering.server_group': '3b67098f-eb50-41e2-8c8a-348367561673'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.116 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6477a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1b50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.118 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '344c74c3-95d6-4f19-993f-b4a89c9d074b', 'name': 'test_0', 'flavor': {'id': 'c6c4f9e1-1f0f-4f2a-a6d1-cf76828fe093', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7e803ca7-2382-4e5a-95f7-55acaa154415'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8318d5a200d74e4386cf4972db015b75', 'user_id': '7387204f74504e288ed7a5dee73f5083', 'hostId': '6b704d868c202dfce1245c3ae64d5f83176b88963479398e3b586eea', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.118 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.118 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c646060>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.118 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c646060>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.118 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.119 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T22:59:38.118878) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.122 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.126 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.127 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.127 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f826c645dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.127 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.127 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f826c647800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.127 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.127 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.127 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.127 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.127 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.127 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.128 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.128 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f826c647650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.128 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.128 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.128 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.128 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.128 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.129 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.129 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.129 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f826c645640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.129 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.129 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.130 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.130 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.130 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T22:59:38.127626) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.130 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T22:59:38.128808) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.131 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T22:59:38.130135) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.219 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.latency volume: 2048805649 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.220 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.latency volume: 9512100 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.220 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.288 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.latency volume: 1982773015 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.289 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.latency volume: 11972381 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.289 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.290 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.290 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f826c8ae7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.290 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.290 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.290 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.291 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.291 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.291 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.bytes volume: 2214 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.292 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.292 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f826c645a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.292 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.292 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.292 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.293 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.293 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.293 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T22:59:38.291048) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.293 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.294 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.294 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.294 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.295 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T22:59:38.292975) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.295 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.296 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f826c6462a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.296 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.296 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.296 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.296 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.297 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.297 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T22:59:38.296886) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.297 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.298 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f826c647f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.298 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.298 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.298 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.298 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.299 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T22:59:38.298774) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.325 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/cpu volume: 40110000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.346 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/cpu volume: 43790000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.347 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f826c645af0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.347 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.347 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.347 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.348 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.348 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T22:59:38.348081) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.348 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.349 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f826c645d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.349 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.349 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.349 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.349 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.349 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.349 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/memory.usage volume: 48.734375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.350 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T22:59:38.349454) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.350 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.350 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f826c645b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.350 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.350 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.351 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.351 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.351 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T22:59:38.351132) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.351 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.351 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f826c644a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.352 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.352 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.352 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.352 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.352 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T22:59:38.352441) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.375 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.376 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.376 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.397 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.397 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.398 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.398 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.398 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f826c6453a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.399 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.399 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645490>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.399 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645490>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.399 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.399 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.399 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.400 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.400 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T22:59:38.399361) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.400 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.401 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.401 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.401 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.402 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f826c6454c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.402 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.402 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.402 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.402 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.402 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.latency volume: 669467296 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.403 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T22:59:38.402458) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.403 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.latency volume: 92088857 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.403 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.latency volume: 79077409 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.403 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.latency volume: 603707572 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.403 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.latency volume: 113814738 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.404 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.latency volume: 101138361 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.404 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.404 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f826c645520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.404 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.405 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645550>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.405 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645550>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.405 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.405 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.405 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T22:59:38.405213) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.405 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.406 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.406 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.406 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.407 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.407 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.408 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f826c645d90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.408 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.408 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.408 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.408 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.408 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.409 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.409 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T22:59:38.408445) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.409 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.409 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f826c646570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.409 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.409 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.410 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.410 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.410 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.410 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.411 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.411 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f826c645580>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.411 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.411 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.411 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.412 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.412 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.412 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.412 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T22:59:38.410145) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.412 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T22:59:38.411990) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.413 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.413 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.413 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.413 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.414 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.414 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f826c6455e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.414 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.414 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645610>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.414 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645610>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.415 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.415 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.415 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.415 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.416 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.416 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T22:59:38.414959) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.416 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.416 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.417 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.417 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f826c644050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.417 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.417 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.417 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.418 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.418 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.418 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.incoming.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.419 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.419 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f826c647860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.419 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.419 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.419 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T22:59:38.417974) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.419 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.420 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.420 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.420 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T22:59:38.419988) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.420 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.421 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.421 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f826c6476e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.421 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.421 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.421 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.421 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.421 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.422 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.422 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T22:59:38.421690) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.422 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.423 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f826c6456a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.423 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.423 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645730>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.423 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645730>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.423 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.423 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.423 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.424 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.424 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f826f277b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.424 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.425 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.425 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.425 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T22:59:38.423372) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.425 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.425 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.425 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.426 14 DEBUG ceilometer.compute.pollsters [-] 5409358c-78dc-4761-841a-7f453c6209fb/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.426 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T22:59:38.425432) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.426 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.426 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.427 14 DEBUG ceilometer.compute.pollsters [-] 344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.427 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.428 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f826c647770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.428 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.428 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.428 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.428 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.429 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.429 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.429 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.429 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.429 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.429 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.429 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.429 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.429 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.429 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.429 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.429 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.429 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.429 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.430 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.430 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.430 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.430 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.430 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.430 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.430 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.430 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:59:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 22:59:38.430 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 22:59:40 compute-0 nova_compute[185650]: 2026-01-27 22:59:40.370 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:59:43 compute-0 nova_compute[185650]: 2026-01-27 22:59:43.030 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:59:45 compute-0 nova_compute[185650]: 2026-01-27 22:59:45.372 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:59:45 compute-0 podman[246268]: 2026-01-27 22:59:45.383083176 +0000 UTC m=+0.083707773 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, managed_by=edpm_ansible, org.label-schema.build-date=20260126, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4)
Jan 27 22:59:45 compute-0 podman[246267]: 2026-01-27 22:59:45.401243617 +0000 UTC m=+0.108615768 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 22:59:45 compute-0 sshd-session[244880]: Received disconnect from 38.102.83.151 port 50012:11: disconnected by user
Jan 27 22:59:45 compute-0 sshd-session[244880]: Disconnected from user zuul 38.102.83.151 port 50012
Jan 27 22:59:45 compute-0 sshd-session[244877]: pam_unix(sshd:session): session closed for user zuul
Jan 27 22:59:45 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Jan 27 22:59:45 compute-0 systemd[1]: session-30.scope: Consumed 4.187s CPU time.
Jan 27 22:59:45 compute-0 systemd-logind[789]: Session 30 logged out. Waiting for processes to exit.
Jan 27 22:59:45 compute-0 systemd-logind[789]: Removed session 30.
Jan 27 22:59:48 compute-0 nova_compute[185650]: 2026-01-27 22:59:48.032 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:59:50 compute-0 nova_compute[185650]: 2026-01-27 22:59:50.374 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:59:51 compute-0 podman[246305]: 2026-01-27 22:59:51.398189214 +0000 UTC m=+0.091910046 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 22:59:53 compute-0 nova_compute[185650]: 2026-01-27 22:59:53.035 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:59:53 compute-0 podman[246328]: 2026-01-27 22:59:53.400878939 +0000 UTC m=+0.095300234 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 27 22:59:55 compute-0 nova_compute[185650]: 2026-01-27 22:59:55.375 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:59:56 compute-0 podman[246348]: 2026-01-27 22:59:56.398659475 +0000 UTC m=+0.097475621 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, config_id=kepler, container_name=kepler, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, io.buildah.version=1.29.0, release-0.7.12=, version=9.4)
Jan 27 22:59:56 compute-0 podman[246349]: 2026-01-27 22:59:56.432546684 +0000 UTC m=+0.121468032 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, config_id=ovn_controller)
Jan 27 22:59:58 compute-0 nova_compute[185650]: 2026-01-27 22:59:58.037 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 22:59:58 compute-0 nova_compute[185650]: 2026-01-27 22:59:58.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 22:59:59 compute-0 podman[201529]: time="2026-01-27T22:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 22:59:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 22:59:59 compute-0 podman[201529]: @ - - [27/Jan/2026:22:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4371 "" "Go-http-client/1.1"
Jan 27 23:00:00 compute-0 nova_compute[185650]: 2026-01-27 23:00:00.009 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:00:00 compute-0 nova_compute[185650]: 2026-01-27 23:00:00.009 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 23:00:00 compute-0 nova_compute[185650]: 2026-01-27 23:00:00.377 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:01 compute-0 openstack_network_exporter[204648]: ERROR   23:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 23:00:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:00:01 compute-0 openstack_network_exporter[204648]: ERROR   23:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 23:00:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:00:01 compute-0 nova_compute[185650]: 2026-01-27 23:00:01.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:00:01 compute-0 nova_compute[185650]: 2026-01-27 23:00:01.994 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:00:02 compute-0 nova_compute[185650]: 2026-01-27 23:00:02.025 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:00:02 compute-0 nova_compute[185650]: 2026-01-27 23:00:02.026 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:00:02 compute-0 nova_compute[185650]: 2026-01-27 23:00:02.026 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:00:02 compute-0 nova_compute[185650]: 2026-01-27 23:00:02.026 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 23:00:02 compute-0 nova_compute[185650]: 2026-01-27 23:00:02.110 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:00:02 compute-0 nova_compute[185650]: 2026-01-27 23:00:02.211 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:00:02 compute-0 nova_compute[185650]: 2026-01-27 23:00:02.213 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:00:02 compute-0 nova_compute[185650]: 2026-01-27 23:00:02.269 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:00:02 compute-0 nova_compute[185650]: 2026-01-27 23:00:02.271 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:00:02 compute-0 nova_compute[185650]: 2026-01-27 23:00:02.331 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:00:02 compute-0 nova_compute[185650]: 2026-01-27 23:00:02.333 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:00:02 compute-0 nova_compute[185650]: 2026-01-27 23:00:02.390 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:00:02 compute-0 nova_compute[185650]: 2026-01-27 23:00:02.397 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:00:02 compute-0 nova_compute[185650]: 2026-01-27 23:00:02.468 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:00:02 compute-0 nova_compute[185650]: 2026-01-27 23:00:02.469 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:00:02 compute-0 nova_compute[185650]: 2026-01-27 23:00:02.527 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:00:02 compute-0 nova_compute[185650]: 2026-01-27 23:00:02.529 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:00:02 compute-0 nova_compute[185650]: 2026-01-27 23:00:02.588 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:00:02 compute-0 nova_compute[185650]: 2026-01-27 23:00:02.590 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:00:02 compute-0 nova_compute[185650]: 2026-01-27 23:00:02.658 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:00:02 compute-0 nova_compute[185650]: 2026-01-27 23:00:02.969 185654 WARNING nova.virt.libvirt.driver [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 23:00:02 compute-0 nova_compute[185650]: 2026-01-27 23:00:02.971 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4870MB free_disk=72.3712272644043GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 23:00:02 compute-0 nova_compute[185650]: 2026-01-27 23:00:02.972 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:00:02 compute-0 nova_compute[185650]: 2026-01-27 23:00:02.972 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:00:03 compute-0 nova_compute[185650]: 2026-01-27 23:00:03.040 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:03 compute-0 nova_compute[185650]: 2026-01-27 23:00:03.139 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 344c74c3-95d6-4f19-993f-b4a89c9d074b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 23:00:03 compute-0 nova_compute[185650]: 2026-01-27 23:00:03.139 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 5409358c-78dc-4761-841a-7f453c6209fb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 23:00:03 compute-0 nova_compute[185650]: 2026-01-27 23:00:03.140 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 23:00:03 compute-0 nova_compute[185650]: 2026-01-27 23:00:03.140 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 23:00:03 compute-0 nova_compute[185650]: 2026-01-27 23:00:03.279 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 23:00:03 compute-0 nova_compute[185650]: 2026-01-27 23:00:03.299 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 23:00:03 compute-0 nova_compute[185650]: 2026-01-27 23:00:03.301 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 23:00:03 compute-0 nova_compute[185650]: 2026-01-27 23:00:03.301 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.329s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:00:03 compute-0 podman[246416]: 2026-01-27 23:00:03.380113925 +0000 UTC m=+0.086180277 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 23:00:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:04.154 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:00:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:04.156 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:00:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:04.157 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:00:04 compute-0 nova_compute[185650]: 2026-01-27 23:00:04.301 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:00:04 compute-0 nova_compute[185650]: 2026-01-27 23:00:04.302 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 23:00:04 compute-0 nova_compute[185650]: 2026-01-27 23:00:04.302 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 23:00:04 compute-0 nova_compute[185650]: 2026-01-27 23:00:04.652 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "refresh_cache-344c74c3-95d6-4f19-993f-b4a89c9d074b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 23:00:04 compute-0 nova_compute[185650]: 2026-01-27 23:00:04.652 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquired lock "refresh_cache-344c74c3-95d6-4f19-993f-b4a89c9d074b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 23:00:04 compute-0 nova_compute[185650]: 2026-01-27 23:00:04.653 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 23:00:04 compute-0 nova_compute[185650]: 2026-01-27 23:00:04.653 185654 DEBUG nova.objects.instance [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 344c74c3-95d6-4f19-993f-b4a89c9d074b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 23:00:05 compute-0 nova_compute[185650]: 2026-01-27 23:00:05.378 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:05 compute-0 nova_compute[185650]: 2026-01-27 23:00:05.707 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Updating instance_info_cache with network_info: [{"id": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "address": "fa:16:3e:27:72:fe", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.119", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap389fa2e1-24", "ovs_interfaceid": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 23:00:05 compute-0 nova_compute[185650]: 2026-01-27 23:00:05.723 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Releasing lock "refresh_cache-344c74c3-95d6-4f19-993f-b4a89c9d074b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 23:00:05 compute-0 nova_compute[185650]: 2026-01-27 23:00:05.723 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 23:00:05 compute-0 nova_compute[185650]: 2026-01-27 23:00:05.724 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:00:05 compute-0 nova_compute[185650]: 2026-01-27 23:00:05.724 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:00:05 compute-0 nova_compute[185650]: 2026-01-27 23:00:05.992 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:00:05 compute-0 nova_compute[185650]: 2026-01-27 23:00:05.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:00:05 compute-0 nova_compute[185650]: 2026-01-27 23:00:05.994 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:00:05 compute-0 nova_compute[185650]: 2026-01-27 23:00:05.994 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 23:00:06 compute-0 podman[246439]: 2026-01-27 23:00:06.375966597 +0000 UTC m=+0.072743789 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., release=1755695350, vendor=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_id=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 27 23:00:08 compute-0 nova_compute[185650]: 2026-01-27 23:00:08.009 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:00:08 compute-0 nova_compute[185650]: 2026-01-27 23:00:08.010 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 23:00:08 compute-0 nova_compute[185650]: 2026-01-27 23:00:08.042 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:08 compute-0 nova_compute[185650]: 2026-01-27 23:00:08.050 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 23:00:10 compute-0 nova_compute[185650]: 2026-01-27 23:00:10.028 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:00:10 compute-0 nova_compute[185650]: 2026-01-27 23:00:10.380 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:13 compute-0 nova_compute[185650]: 2026-01-27 23:00:13.044 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:15 compute-0 nova_compute[185650]: 2026-01-27 23:00:15.383 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:16 compute-0 podman[246461]: 2026-01-27 23:00:16.415478842 +0000 UTC m=+0.100177241 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4)
Jan 27 23:00:16 compute-0 podman[246460]: 2026-01-27 23:00:16.424547737 +0000 UTC m=+0.122036188 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 27 23:00:18 compute-0 nova_compute[185650]: 2026-01-27 23:00:18.047 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:20 compute-0 nova_compute[185650]: 2026-01-27 23:00:20.386 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:22 compute-0 podman[246496]: 2026-01-27 23:00:22.373389299 +0000 UTC m=+0.074812842 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 23:00:23 compute-0 nova_compute[185650]: 2026-01-27 23:00:23.049 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:24 compute-0 podman[246520]: 2026-01-27 23:00:24.393559737 +0000 UTC m=+0.093899307 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 27 23:00:25 compute-0 nova_compute[185650]: 2026-01-27 23:00:25.389 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:27 compute-0 podman[246540]: 2026-01-27 23:00:27.372168364 +0000 UTC m=+0.078152909 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.expose-services=, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, container_name=kepler, name=ubi9, vcs-type=git, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.buildah.version=1.29.0, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 27 23:00:27 compute-0 podman[246541]: 2026-01-27 23:00:27.40749649 +0000 UTC m=+0.108188997 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, 
org.label-schema.license=GPLv2, config_id=ovn_controller)
Jan 27 23:00:28 compute-0 nova_compute[185650]: 2026-01-27 23:00:28.050 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:29 compute-0 podman[201529]: time="2026-01-27T23:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 23:00:29 compute-0 podman[201529]: @ - - [27/Jan/2026:23:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 27 23:00:29 compute-0 podman[201529]: @ - - [27/Jan/2026:23:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4376 "" "Go-http-client/1.1"
Jan 27 23:00:30 compute-0 nova_compute[185650]: 2026-01-27 23:00:30.392 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:31 compute-0 openstack_network_exporter[204648]: ERROR   23:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 23:00:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:00:31 compute-0 openstack_network_exporter[204648]: ERROR   23:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 23:00:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:00:33 compute-0 nova_compute[185650]: 2026-01-27 23:00:33.053 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:34 compute-0 podman[246582]: 2026-01-27 23:00:34.437824359 +0000 UTC m=+0.138178516 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 23:00:35 compute-0 nova_compute[185650]: 2026-01-27 23:00:35.395 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:37 compute-0 podman[246605]: 2026-01-27 23:00:37.377027043 +0000 UTC m=+0.084800301 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=openstack_network_exporter, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-type=git)
Jan 27 23:00:38 compute-0 nova_compute[185650]: 2026-01-27 23:00:38.055 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:40 compute-0 nova_compute[185650]: 2026-01-27 23:00:40.398 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:43 compute-0 nova_compute[185650]: 2026-01-27 23:00:43.057 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.378 185654 DEBUG oslo_concurrency.lockutils [None req-a27645c9-9e91-4fb7-b704-4aa4953b6f43 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "5409358c-78dc-4761-841a-7f453c6209fb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.379 185654 DEBUG oslo_concurrency.lockutils [None req-a27645c9-9e91-4fb7-b704-4aa4953b6f43 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "5409358c-78dc-4761-841a-7f453c6209fb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.379 185654 DEBUG oslo_concurrency.lockutils [None req-a27645c9-9e91-4fb7-b704-4aa4953b6f43 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "5409358c-78dc-4761-841a-7f453c6209fb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.380 185654 DEBUG oslo_concurrency.lockutils [None req-a27645c9-9e91-4fb7-b704-4aa4953b6f43 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "5409358c-78dc-4761-841a-7f453c6209fb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.380 185654 DEBUG oslo_concurrency.lockutils [None req-a27645c9-9e91-4fb7-b704-4aa4953b6f43 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "5409358c-78dc-4761-841a-7f453c6209fb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.381 185654 INFO nova.compute.manager [None req-a27645c9-9e91-4fb7-b704-4aa4953b6f43 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Terminating instance
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.382 185654 DEBUG nova.compute.manager [None req-a27645c9-9e91-4fb7-b704-4aa4953b6f43 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.400 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:45 compute-0 kernel: tapccfe58e9-3f (unregistering): left promiscuous mode
Jan 27 23:00:45 compute-0 NetworkManager[56600]: <info>  [1769554845.4235] device (tapccfe58e9-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 23:00:45 compute-0 ovn_controller[98048]: 2026-01-27T23:00:45Z|00058|binding|INFO|Releasing lport ccfe58e9-3ff7-4073-9f9f-c8e641661ba0 from this chassis (sb_readonly=0)
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.435 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:45 compute-0 ovn_controller[98048]: 2026-01-27T23:00:45Z|00059|binding|INFO|Setting lport ccfe58e9-3ff7-4073-9f9f-c8e641661ba0 down in Southbound
Jan 27 23:00:45 compute-0 ovn_controller[98048]: 2026-01-27T23:00:45Z|00060|binding|INFO|Removing iface tapccfe58e9-3f ovn-installed in OVS
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.440 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:45 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:45.448 107302 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:dc:a3 192.168.0.99'], port_security=['fa:16:3e:17:dc:a3 192.168.0.99'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-e3ismbxiivp3-je4u2ztq4ixb-joz7rt6vemeh-port-xhiell7bdepe', 'neutron:cidrs': '192.168.0.99/24', 'neutron:device_id': '5409358c-78dc-4761-841a-7f453c6209fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-98f694e3-becc-413f-b42b-35a7171f7f96', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-e3ismbxiivp3-je4u2ztq4ixb-joz7rt6vemeh-port-xhiell7bdepe', 'neutron:project_id': '8318d5a200d74e4386cf4972db015b75', 'neutron:revision_number': '4', 'neutron:security_group_ids': '597f1057-390b-408a-b8d0-705fb45de27b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.238', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d21d3e2-2f64-49c8-bca6-9efc66f5bd67, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8d908cb640>], logical_port=ccfe58e9-3ff7-4073-9f9f-c8e641661ba0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f8d908cb640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 23:00:45 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:45.451 107302 INFO neutron.agent.ovn.metadata.agent [-] Port ccfe58e9-3ff7-4073-9f9f-c8e641661ba0 in datapath 98f694e3-becc-413f-b42b-35a7171f7f96 unbound from our chassis
Jan 27 23:00:45 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:45.454 107302 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 98f694e3-becc-413f-b42b-35a7171f7f96
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.459 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:45 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:45.471 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[8faad155-f8ce-4ad4-8457-3898d36f407d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:00:45 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Jan 27 23:00:45 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 1min 46.681s CPU time.
Jan 27 23:00:45 compute-0 systemd-machined[157036]: Machine qemu-4-instance-00000004 terminated.
Jan 27 23:00:45 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:45.508 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[1e6f93e6-2322-44f0-bf69-6df98bda808d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:00:45 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:45.511 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[8ee4c81d-d5a1-4005-a248-ffc729ca4fcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:00:45 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:45.539 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[e32e4e74-9b87-4bd0-886a-295d8a2f06ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:00:45 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:45.564 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[000b7249-e044-46aa-9a45-d87d4569c407]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap98f694e3-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:25:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 15, 'rx_bytes': 658, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 15, 'rx_bytes': 658, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 365000, 'reachable_time': 31315, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246637, 'error': None, 'target': 'ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:00:45 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:45.590 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[0167aca6-d1d8-4ce0-9566-9443cfcf9949]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap98f694e3-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 365013, 'tstamp': 365013}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246638, 'error': None, 'target': 'ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap98f694e3-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 365017, 'tstamp': 365017}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246638, 'error': None, 'target': 'ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:00:45 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:45.591 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98f694e3-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.593 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.599 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:45 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:45.600 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98f694e3-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:00:45 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:45.600 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 23:00:45 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:45.600 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap98f694e3-b0, col_values=(('external_ids', {'iface-id': 'acacffcb-4de9-40c5-aeef-3e5766b557e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:00:45 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:45.601 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.615 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.622 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.695 185654 INFO nova.virt.libvirt.driver [-] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Instance destroyed successfully.
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.696 185654 DEBUG nova.objects.instance [None req-a27645c9-9e91-4fb7-b704-4aa4953b6f43 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lazy-loading 'resources' on Instance uuid 5409358c-78dc-4761-841a-7f453c6209fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.709 185654 DEBUG nova.virt.libvirt.vif [None req-a27645c9-9e91-4fb7-b704-4aa4953b6f43 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T22:50:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-bxiivp3-je4u2ztq4ixb-joz7rt6vemeh-vnf-jpr5uezxduem',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-bxiivp3-je4u2ztq4ixb-joz7rt6vemeh-vnf-jpr5uezxduem',id=4,image_ref='7e803ca7-2382-4e5a-95f7-55acaa154415',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T22:50:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='3b67098f-eb50-41e2-8c8a-348367561673'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8318d5a200d74e4386cf4972db015b75',ramdisk_id='',reservation_id='r-hvzumw9m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='7e803ca7-2382-4e5a-95f7-55acaa154415',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T22:50:45Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0xMDAyMDk3NjU5MzI3NTYyOTkxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTEwMDIwOTc2NTkzMjc1NjI5OTE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MTAwMjA5NzY1OTMyNzU2Mjk5MT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTEwMDIwOTc2NTkzMjc1NjI5OTE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0xMDAyMDk3NjU5MzI3NTYyOTkxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0xMDAyMDk3NjU5MzI3NTYyOTkxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Jan 27 23:00:45 compute-0 nova_compute[185650]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MTAwM
jA5NzY1OTMyNzU2Mjk5MT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTEwMDIwOTc2NTkzMjc1NjI5OTE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0xMDAyMDk3NjU5MzI3NTYyOTkxPT0tLQo=',user_id='7387204f74504e288ed7a5dee73f5083',uuid=5409358c-78dc-4761-841a-7f453c6209fb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ccfe58e9-3ff7-4073-9f9f-c8e641661ba0", "address": "fa:16:3e:17:dc:a3", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccfe58e9-3f", "ovs_interfaceid": "ccfe58e9-3ff7-4073-9f9f-c8e641661ba0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.709 185654 DEBUG nova.network.os_vif_util [None req-a27645c9-9e91-4fb7-b704-4aa4953b6f43 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Converting VIF {"id": "ccfe58e9-3ff7-4073-9f9f-c8e641661ba0", "address": "fa:16:3e:17:dc:a3", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccfe58e9-3f", "ovs_interfaceid": "ccfe58e9-3ff7-4073-9f9f-c8e641661ba0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.710 185654 DEBUG nova.network.os_vif_util [None req-a27645c9-9e91-4fb7-b704-4aa4953b6f43 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:17:dc:a3,bridge_name='br-int',has_traffic_filtering=True,id=ccfe58e9-3ff7-4073-9f9f-c8e641661ba0,network=Network(98f694e3-becc-413f-b42b-35a7171f7f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapccfe58e9-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.711 185654 DEBUG os_vif [None req-a27645c9-9e91-4fb7-b704-4aa4953b6f43 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:dc:a3,bridge_name='br-int',has_traffic_filtering=True,id=ccfe58e9-3ff7-4073-9f9f-c8e641661ba0,network=Network(98f694e3-becc-413f-b42b-35a7171f7f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapccfe58e9-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 23:00:45 compute-0 rsyslogd[235951]: message too long (8192) with configured size 8096, begin of message is: 2026-01-27 23:00:45.709 185654 DEBUG nova.virt.libvirt.vif [None req-a27645c9-9e [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.712 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.713 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapccfe58e9-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.715 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.716 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.720 185654 INFO os_vif [None req-a27645c9-9e91-4fb7-b704-4aa4953b6f43 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:dc:a3,bridge_name='br-int',has_traffic_filtering=True,id=ccfe58e9-3ff7-4073-9f9f-c8e641661ba0,network=Network(98f694e3-becc-413f-b42b-35a7171f7f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapccfe58e9-3f')
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.720 185654 INFO nova.virt.libvirt.driver [None req-a27645c9-9e91-4fb7-b704-4aa4953b6f43 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Deleting instance files /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb_del
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.721 185654 INFO nova.virt.libvirt.driver [None req-a27645c9-9e91-4fb7-b704-4aa4953b6f43 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Deletion of /var/lib/nova/instances/5409358c-78dc-4761-841a-7f453c6209fb_del complete
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.775 185654 DEBUG nova.compute.manager [req-a05db209-3fc1-4f12-b146-a383b657d457 req-621dfc73-7e8c-413a-8b09-a0a9e921aec5 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Received event network-vif-unplugged-ccfe58e9-3ff7-4073-9f9f-c8e641661ba0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.775 185654 DEBUG oslo_concurrency.lockutils [req-a05db209-3fc1-4f12-b146-a383b657d457 req-621dfc73-7e8c-413a-8b09-a0a9e921aec5 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "5409358c-78dc-4761-841a-7f453c6209fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.776 185654 DEBUG oslo_concurrency.lockutils [req-a05db209-3fc1-4f12-b146-a383b657d457 req-621dfc73-7e8c-413a-8b09-a0a9e921aec5 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "5409358c-78dc-4761-841a-7f453c6209fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.776 185654 DEBUG oslo_concurrency.lockutils [req-a05db209-3fc1-4f12-b146-a383b657d457 req-621dfc73-7e8c-413a-8b09-a0a9e921aec5 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "5409358c-78dc-4761-841a-7f453c6209fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.776 185654 DEBUG nova.compute.manager [req-a05db209-3fc1-4f12-b146-a383b657d457 req-621dfc73-7e8c-413a-8b09-a0a9e921aec5 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] No waiting events found dispatching network-vif-unplugged-ccfe58e9-3ff7-4073-9f9f-c8e641661ba0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.777 185654 DEBUG nova.compute.manager [req-a05db209-3fc1-4f12-b146-a383b657d457 req-621dfc73-7e8c-413a-8b09-a0a9e921aec5 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Received event network-vif-unplugged-ccfe58e9-3ff7-4073-9f9f-c8e641661ba0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.785 185654 INFO nova.compute.manager [None req-a27645c9-9e91-4fb7-b704-4aa4953b6f43 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Took 0.40 seconds to destroy the instance on the hypervisor.
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.786 185654 DEBUG oslo.service.loopingcall [None req-a27645c9-9e91-4fb7-b704-4aa4953b6f43 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.786 185654 DEBUG nova.compute.manager [-] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.786 185654 DEBUG nova.network.neutron [-] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 23:00:45 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:45.928 107302 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:41:f8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '26:ae:8e:b8:80:28'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 23:00:45 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:45.928 107302 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 23:00:45 compute-0 nova_compute[185650]: 2026-01-27 23:00:45.932 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:46 compute-0 nova_compute[185650]: 2026-01-27 23:00:46.627 185654 DEBUG nova.compute.manager [req-254cc67a-26f7-453e-8d3d-6baa6db4806c req-33fa75f2-5f1c-4c7e-b4aa-1699926bb898 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Received event network-changed-ccfe58e9-3ff7-4073-9f9f-c8e641661ba0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 23:00:46 compute-0 nova_compute[185650]: 2026-01-27 23:00:46.628 185654 DEBUG nova.compute.manager [req-254cc67a-26f7-453e-8d3d-6baa6db4806c req-33fa75f2-5f1c-4c7e-b4aa-1699926bb898 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Refreshing instance network info cache due to event network-changed-ccfe58e9-3ff7-4073-9f9f-c8e641661ba0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 23:00:46 compute-0 nova_compute[185650]: 2026-01-27 23:00:46.628 185654 DEBUG oslo_concurrency.lockutils [req-254cc67a-26f7-453e-8d3d-6baa6db4806c req-33fa75f2-5f1c-4c7e-b4aa-1699926bb898 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "refresh_cache-5409358c-78dc-4761-841a-7f453c6209fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 23:00:46 compute-0 nova_compute[185650]: 2026-01-27 23:00:46.629 185654 DEBUG oslo_concurrency.lockutils [req-254cc67a-26f7-453e-8d3d-6baa6db4806c req-33fa75f2-5f1c-4c7e-b4aa-1699926bb898 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquired lock "refresh_cache-5409358c-78dc-4761-841a-7f453c6209fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 23:00:46 compute-0 nova_compute[185650]: 2026-01-27 23:00:46.629 185654 DEBUG nova.network.neutron [req-254cc67a-26f7-453e-8d3d-6baa6db4806c req-33fa75f2-5f1c-4c7e-b4aa-1699926bb898 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Refreshing network info cache for port ccfe58e9-3ff7-4073-9f9f-c8e641661ba0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 23:00:46 compute-0 nova_compute[185650]: 2026-01-27 23:00:46.781 185654 DEBUG nova.network.neutron [-] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 23:00:46 compute-0 nova_compute[185650]: 2026-01-27 23:00:46.803 185654 INFO nova.compute.manager [-] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Took 1.02 seconds to deallocate network for instance.
Jan 27 23:00:46 compute-0 nova_compute[185650]: 2026-01-27 23:00:46.811 185654 INFO nova.network.neutron [req-254cc67a-26f7-453e-8d3d-6baa6db4806c req-33fa75f2-5f1c-4c7e-b4aa-1699926bb898 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Port ccfe58e9-3ff7-4073-9f9f-c8e641661ba0 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Jan 27 23:00:46 compute-0 nova_compute[185650]: 2026-01-27 23:00:46.811 185654 DEBUG nova.network.neutron [req-254cc67a-26f7-453e-8d3d-6baa6db4806c req-33fa75f2-5f1c-4c7e-b4aa-1699926bb898 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 23:00:46 compute-0 nova_compute[185650]: 2026-01-27 23:00:46.847 185654 DEBUG oslo_concurrency.lockutils [req-254cc67a-26f7-453e-8d3d-6baa6db4806c req-33fa75f2-5f1c-4c7e-b4aa-1699926bb898 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Releasing lock "refresh_cache-5409358c-78dc-4761-841a-7f453c6209fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 23:00:46 compute-0 nova_compute[185650]: 2026-01-27 23:00:46.853 185654 DEBUG oslo_concurrency.lockutils [None req-a27645c9-9e91-4fb7-b704-4aa4953b6f43 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:00:46 compute-0 nova_compute[185650]: 2026-01-27 23:00:46.854 185654 DEBUG oslo_concurrency.lockutils [None req-a27645c9-9e91-4fb7-b704-4aa4953b6f43 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:00:46 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:46.931 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e88f80e1-ee63-4bdc-95c3-ad473efb7428, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:00:46 compute-0 nova_compute[185650]: 2026-01-27 23:00:46.935 185654 DEBUG nova.compute.provider_tree [None req-a27645c9-9e91-4fb7-b704-4aa4953b6f43 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 23:00:46 compute-0 nova_compute[185650]: 2026-01-27 23:00:46.952 185654 DEBUG nova.scheduler.client.report [None req-a27645c9-9e91-4fb7-b704-4aa4953b6f43 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 23:00:46 compute-0 nova_compute[185650]: 2026-01-27 23:00:46.979 185654 DEBUG oslo_concurrency.lockutils [None req-a27645c9-9e91-4fb7-b704-4aa4953b6f43 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:00:47 compute-0 nova_compute[185650]: 2026-01-27 23:00:47.017 185654 INFO nova.scheduler.client.report [None req-a27645c9-9e91-4fb7-b704-4aa4953b6f43 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Deleted allocations for instance 5409358c-78dc-4761-841a-7f453c6209fb
Jan 27 23:00:47 compute-0 nova_compute[185650]: 2026-01-27 23:00:47.088 185654 DEBUG oslo_concurrency.lockutils [None req-a27645c9-9e91-4fb7-b704-4aa4953b6f43 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "5409358c-78dc-4761-841a-7f453c6209fb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:00:47 compute-0 podman[246661]: 2026-01-27 23:00:47.430637901 +0000 UTC m=+0.115965010 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Jan 27 23:00:47 compute-0 podman[246660]: 2026-01-27 23:00:47.4691318 +0000 UTC m=+0.147936200 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 23:00:47 compute-0 nova_compute[185650]: 2026-01-27 23:00:47.855 185654 DEBUG nova.compute.manager [req-c905a1fe-e444-4606-a6ec-2e3e9f0091da req-58be959d-e7c0-41e7-8f1d-1b580805a68a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Received event network-vif-plugged-ccfe58e9-3ff7-4073-9f9f-c8e641661ba0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 23:00:47 compute-0 nova_compute[185650]: 2026-01-27 23:00:47.855 185654 DEBUG oslo_concurrency.lockutils [req-c905a1fe-e444-4606-a6ec-2e3e9f0091da req-58be959d-e7c0-41e7-8f1d-1b580805a68a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "5409358c-78dc-4761-841a-7f453c6209fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:00:47 compute-0 nova_compute[185650]: 2026-01-27 23:00:47.856 185654 DEBUG oslo_concurrency.lockutils [req-c905a1fe-e444-4606-a6ec-2e3e9f0091da req-58be959d-e7c0-41e7-8f1d-1b580805a68a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "5409358c-78dc-4761-841a-7f453c6209fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:00:47 compute-0 nova_compute[185650]: 2026-01-27 23:00:47.856 185654 DEBUG oslo_concurrency.lockutils [req-c905a1fe-e444-4606-a6ec-2e3e9f0091da req-58be959d-e7c0-41e7-8f1d-1b580805a68a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "5409358c-78dc-4761-841a-7f453c6209fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:00:47 compute-0 nova_compute[185650]: 2026-01-27 23:00:47.856 185654 DEBUG nova.compute.manager [req-c905a1fe-e444-4606-a6ec-2e3e9f0091da req-58be959d-e7c0-41e7-8f1d-1b580805a68a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] No waiting events found dispatching network-vif-plugged-ccfe58e9-3ff7-4073-9f9f-c8e641661ba0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 23:00:47 compute-0 nova_compute[185650]: 2026-01-27 23:00:47.856 185654 WARNING nova.compute.manager [req-c905a1fe-e444-4606-a6ec-2e3e9f0091da req-58be959d-e7c0-41e7-8f1d-1b580805a68a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Received unexpected event network-vif-plugged-ccfe58e9-3ff7-4073-9f9f-c8e641661ba0 for instance with vm_state deleted and task_state None.
Jan 27 23:00:50 compute-0 nova_compute[185650]: 2026-01-27 23:00:50.402 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:50 compute-0 nova_compute[185650]: 2026-01-27 23:00:50.715 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:53 compute-0 podman[246697]: 2026-01-27 23:00:53.382509613 +0000 UTC m=+0.076626970 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 23:00:55 compute-0 nova_compute[185650]: 2026-01-27 23:00:55.406 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:55 compute-0 podman[246720]: 2026-01-27 23:00:55.430502853 +0000 UTC m=+0.124511022 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Jan 27 23:00:55 compute-0 nova_compute[185650]: 2026-01-27 23:00:55.717 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:56 compute-0 nova_compute[185650]: 2026-01-27 23:00:56.984 185654 DEBUG oslo_concurrency.lockutils [None req-bd8944e1-34ea-4742-976d-ae02fcfb44a6 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "344c74c3-95d6-4f19-993f-b4a89c9d074b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:00:56 compute-0 nova_compute[185650]: 2026-01-27 23:00:56.984 185654 DEBUG oslo_concurrency.lockutils [None req-bd8944e1-34ea-4742-976d-ae02fcfb44a6 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "344c74c3-95d6-4f19-993f-b4a89c9d074b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:00:56 compute-0 nova_compute[185650]: 2026-01-27 23:00:56.984 185654 DEBUG oslo_concurrency.lockutils [None req-bd8944e1-34ea-4742-976d-ae02fcfb44a6 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "344c74c3-95d6-4f19-993f-b4a89c9d074b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:00:56 compute-0 nova_compute[185650]: 2026-01-27 23:00:56.984 185654 DEBUG oslo_concurrency.lockutils [None req-bd8944e1-34ea-4742-976d-ae02fcfb44a6 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "344c74c3-95d6-4f19-993f-b4a89c9d074b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:00:56 compute-0 nova_compute[185650]: 2026-01-27 23:00:56.985 185654 DEBUG oslo_concurrency.lockutils [None req-bd8944e1-34ea-4742-976d-ae02fcfb44a6 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "344c74c3-95d6-4f19-993f-b4a89c9d074b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:00:56 compute-0 nova_compute[185650]: 2026-01-27 23:00:56.985 185654 INFO nova.compute.manager [None req-bd8944e1-34ea-4742-976d-ae02fcfb44a6 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Terminating instance
Jan 27 23:00:56 compute-0 nova_compute[185650]: 2026-01-27 23:00:56.986 185654 DEBUG nova.compute.manager [None req-bd8944e1-34ea-4742-976d-ae02fcfb44a6 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 23:00:57 compute-0 kernel: tap389fa2e1-24 (unregistering): left promiscuous mode
Jan 27 23:00:57 compute-0 NetworkManager[56600]: <info>  [1769554857.0263] device (tap389fa2e1-24): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 23:00:57 compute-0 nova_compute[185650]: 2026-01-27 23:00:57.036 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:57 compute-0 ovn_controller[98048]: 2026-01-27T23:00:57Z|00061|binding|INFO|Releasing lport 389fa2e1-24bb-48bb-a577-b2f7ade8ddc5 from this chassis (sb_readonly=0)
Jan 27 23:00:57 compute-0 ovn_controller[98048]: 2026-01-27T23:00:57Z|00062|binding|INFO|Setting lport 389fa2e1-24bb-48bb-a577-b2f7ade8ddc5 down in Southbound
Jan 27 23:00:57 compute-0 ovn_controller[98048]: 2026-01-27T23:00:57Z|00063|binding|INFO|Removing iface tap389fa2e1-24 ovn-installed in OVS
Jan 27 23:00:57 compute-0 nova_compute[185650]: 2026-01-27 23:00:57.038 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:57 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:57.043 107302 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:72:fe 192.168.0.119'], port_security=['fa:16:3e:27:72:fe 192.168.0.119'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.119/24', 'neutron:device_id': '344c74c3-95d6-4f19-993f-b4a89c9d074b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-98f694e3-becc-413f-b42b-35a7171f7f96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8318d5a200d74e4386cf4972db015b75', 'neutron:revision_number': '4', 'neutron:security_group_ids': '597f1057-390b-408a-b8d0-705fb45de27b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d21d3e2-2f64-49c8-bca6-9efc66f5bd67, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8d908cb640>], logical_port=389fa2e1-24bb-48bb-a577-b2f7ade8ddc5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f8d908cb640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 23:00:57 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:57.045 107302 INFO neutron.agent.ovn.metadata.agent [-] Port 389fa2e1-24bb-48bb-a577-b2f7ade8ddc5 in datapath 98f694e3-becc-413f-b42b-35a7171f7f96 unbound from our chassis
Jan 27 23:00:57 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:57.046 107302 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 98f694e3-becc-413f-b42b-35a7171f7f96, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 23:00:57 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:57.049 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[86070d48-b68c-4723-a2d4-e4e007ea05a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:00:57 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:57.050 107302 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96 namespace which is not needed anymore
Jan 27 23:00:57 compute-0 nova_compute[185650]: 2026-01-27 23:00:57.058 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:57 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Jan 27 23:00:57 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 2min 41.230s CPU time.
Jan 27 23:00:57 compute-0 systemd-machined[157036]: Machine qemu-1-instance-00000001 terminated.
Jan 27 23:00:57 compute-0 nova_compute[185650]: 2026-01-27 23:00:57.217 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:57 compute-0 nova_compute[185650]: 2026-01-27 23:00:57.228 185654 DEBUG nova.compute.manager [req-a7693ed9-138e-49ca-b76a-c0762f4e3378 req-fe59281c-d233-4d15-9f1b-20731de1aee9 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Received event network-vif-unplugged-389fa2e1-24bb-48bb-a577-b2f7ade8ddc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 23:00:57 compute-0 nova_compute[185650]: 2026-01-27 23:00:57.229 185654 DEBUG oslo_concurrency.lockutils [req-a7693ed9-138e-49ca-b76a-c0762f4e3378 req-fe59281c-d233-4d15-9f1b-20731de1aee9 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "344c74c3-95d6-4f19-993f-b4a89c9d074b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:00:57 compute-0 nova_compute[185650]: 2026-01-27 23:00:57.230 185654 DEBUG oslo_concurrency.lockutils [req-a7693ed9-138e-49ca-b76a-c0762f4e3378 req-fe59281c-d233-4d15-9f1b-20731de1aee9 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "344c74c3-95d6-4f19-993f-b4a89c9d074b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:00:57 compute-0 nova_compute[185650]: 2026-01-27 23:00:57.232 185654 DEBUG oslo_concurrency.lockutils [req-a7693ed9-138e-49ca-b76a-c0762f4e3378 req-fe59281c-d233-4d15-9f1b-20731de1aee9 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "344c74c3-95d6-4f19-993f-b4a89c9d074b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:00:57 compute-0 nova_compute[185650]: 2026-01-27 23:00:57.233 185654 DEBUG nova.compute.manager [req-a7693ed9-138e-49ca-b76a-c0762f4e3378 req-fe59281c-d233-4d15-9f1b-20731de1aee9 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] No waiting events found dispatching network-vif-unplugged-389fa2e1-24bb-48bb-a577-b2f7ade8ddc5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 23:00:57 compute-0 nova_compute[185650]: 2026-01-27 23:00:57.233 185654 DEBUG nova.compute.manager [req-a7693ed9-138e-49ca-b76a-c0762f4e3378 req-fe59281c-d233-4d15-9f1b-20731de1aee9 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Received event network-vif-unplugged-389fa2e1-24bb-48bb-a577-b2f7ade8ddc5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 23:00:57 compute-0 nova_compute[185650]: 2026-01-27 23:00:57.234 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:57 compute-0 neutron-haproxy-ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96[238855]: [NOTICE]   (238859) : haproxy version is 2.8.14-c23fe91
Jan 27 23:00:57 compute-0 neutron-haproxy-ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96[238855]: [NOTICE]   (238859) : path to executable is /usr/sbin/haproxy
Jan 27 23:00:57 compute-0 neutron-haproxy-ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96[238855]: [WARNING]  (238859) : Exiting Master process...
Jan 27 23:00:57 compute-0 neutron-haproxy-ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96[238855]: [ALERT]    (238859) : Current worker (238861) exited with code 143 (Terminated)
Jan 27 23:00:57 compute-0 neutron-haproxy-ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96[238855]: [WARNING]  (238859) : All workers exited. Exiting... (0)
Jan 27 23:00:57 compute-0 systemd[1]: libpod-2fe55454a57ca8e01dce97f654e0a47b037abf96a1e82df059c72ef4ce87c3fe.scope: Deactivated successfully.
Jan 27 23:00:57 compute-0 nova_compute[185650]: 2026-01-27 23:00:57.293 185654 INFO nova.virt.libvirt.driver [-] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Instance destroyed successfully.
Jan 27 23:00:57 compute-0 nova_compute[185650]: 2026-01-27 23:00:57.294 185654 DEBUG nova.objects.instance [None req-bd8944e1-34ea-4742-976d-ae02fcfb44a6 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lazy-loading 'resources' on Instance uuid 344c74c3-95d6-4f19-993f-b4a89c9d074b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 23:00:57 compute-0 podman[246766]: 2026-01-27 23:00:57.299129949 +0000 UTC m=+0.092531052 container died 2fe55454a57ca8e01dce97f654e0a47b037abf96a1e82df059c72ef4ce87c3fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 23:00:57 compute-0 nova_compute[185650]: 2026-01-27 23:00:57.358 185654 DEBUG nova.virt.libvirt.vif [None req-bd8944e1-34ea-4742-976d-ae02fcfb44a6 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T22:43:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='7e803ca7-2382-4e5a-95f7-55acaa154415',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T22:43:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8318d5a200d74e4386cf4972db015b75',ramdisk_id='',reservation_id='r-rgck83ce',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='7e803ca7-2382-4e5a-95f7-55acaa154415',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T22:43:35Z,user_data=None,user_id='7387204f74504e288ed7a5dee73f5083',uuid=344c74c3-95d6-4f19-993f-b4a89c9d074b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "address": "fa:16:3e:27:72:fe", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.119", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap389fa2e1-24", "ovs_interfaceid": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 23:00:57 compute-0 nova_compute[185650]: 2026-01-27 23:00:57.359 185654 DEBUG nova.network.os_vif_util [None req-bd8944e1-34ea-4742-976d-ae02fcfb44a6 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Converting VIF {"id": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "address": "fa:16:3e:27:72:fe", "network": {"id": "98f694e3-becc-413f-b42b-35a7171f7f96", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.119", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8318d5a200d74e4386cf4972db015b75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap389fa2e1-24", "ovs_interfaceid": "389fa2e1-24bb-48bb-a577-b2f7ade8ddc5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 23:00:57 compute-0 nova_compute[185650]: 2026-01-27 23:00:57.360 185654 DEBUG nova.network.os_vif_util [None req-bd8944e1-34ea-4742-976d-ae02fcfb44a6 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:27:72:fe,bridge_name='br-int',has_traffic_filtering=True,id=389fa2e1-24bb-48bb-a577-b2f7ade8ddc5,network=Network(98f694e3-becc-413f-b42b-35a7171f7f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap389fa2e1-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 23:00:57 compute-0 nova_compute[185650]: 2026-01-27 23:00:57.360 185654 DEBUG os_vif [None req-bd8944e1-34ea-4742-976d-ae02fcfb44a6 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:27:72:fe,bridge_name='br-int',has_traffic_filtering=True,id=389fa2e1-24bb-48bb-a577-b2f7ade8ddc5,network=Network(98f694e3-becc-413f-b42b-35a7171f7f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap389fa2e1-24') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 23:00:57 compute-0 nova_compute[185650]: 2026-01-27 23:00:57.362 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:57 compute-0 nova_compute[185650]: 2026-01-27 23:00:57.363 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap389fa2e1-24, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:00:57 compute-0 nova_compute[185650]: 2026-01-27 23:00:57.365 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:57 compute-0 nova_compute[185650]: 2026-01-27 23:00:57.367 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:57 compute-0 nova_compute[185650]: 2026-01-27 23:00:57.373 185654 INFO os_vif [None req-bd8944e1-34ea-4742-976d-ae02fcfb44a6 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:27:72:fe,bridge_name='br-int',has_traffic_filtering=True,id=389fa2e1-24bb-48bb-a577-b2f7ade8ddc5,network=Network(98f694e3-becc-413f-b42b-35a7171f7f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap389fa2e1-24')
Jan 27 23:00:57 compute-0 nova_compute[185650]: 2026-01-27 23:00:57.374 185654 INFO nova.virt.libvirt.driver [None req-bd8944e1-34ea-4742-976d-ae02fcfb44a6 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Deleting instance files /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b_del
Jan 27 23:00:57 compute-0 nova_compute[185650]: 2026-01-27 23:00:57.376 185654 INFO nova.virt.libvirt.driver [None req-bd8944e1-34ea-4742-976d-ae02fcfb44a6 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Deletion of /var/lib/nova/instances/344c74c3-95d6-4f19-993f-b4a89c9d074b_del complete
Jan 27 23:00:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-93608add820367f9d9c7ed63e554c8796492e0eb1461a1e19ba1cd3745b99a2c-merged.mount: Deactivated successfully.
Jan 27 23:00:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2fe55454a57ca8e01dce97f654e0a47b037abf96a1e82df059c72ef4ce87c3fe-userdata-shm.mount: Deactivated successfully.
Jan 27 23:00:57 compute-0 podman[246766]: 2026-01-27 23:00:57.407816669 +0000 UTC m=+0.201217782 container cleanup 2fe55454a57ca8e01dce97f654e0a47b037abf96a1e82df059c72ef4ce87c3fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 27 23:00:57 compute-0 nova_compute[185650]: 2026-01-27 23:00:57.424 185654 INFO nova.compute.manager [None req-bd8944e1-34ea-4742-976d-ae02fcfb44a6 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Took 0.44 seconds to destroy the instance on the hypervisor.
Jan 27 23:00:57 compute-0 nova_compute[185650]: 2026-01-27 23:00:57.425 185654 DEBUG oslo.service.loopingcall [None req-bd8944e1-34ea-4742-976d-ae02fcfb44a6 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 23:00:57 compute-0 nova_compute[185650]: 2026-01-27 23:00:57.426 185654 DEBUG nova.compute.manager [-] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 23:00:57 compute-0 nova_compute[185650]: 2026-01-27 23:00:57.426 185654 DEBUG nova.network.neutron [-] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 23:00:57 compute-0 systemd[1]: libpod-conmon-2fe55454a57ca8e01dce97f654e0a47b037abf96a1e82df059c72ef4ce87c3fe.scope: Deactivated successfully.
Jan 27 23:00:57 compute-0 podman[246816]: 2026-01-27 23:00:57.503581874 +0000 UTC m=+0.067226325 container remove 2fe55454a57ca8e01dce97f654e0a47b037abf96a1e82df059c72ef4ce87c3fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 23:00:57 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:57.514 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[a245c87e-c629-4ff4-be2a-338b03f58191]: (4, ('Tue Jan 27 11:00:57 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96 (2fe55454a57ca8e01dce97f654e0a47b037abf96a1e82df059c72ef4ce87c3fe)\n2fe55454a57ca8e01dce97f654e0a47b037abf96a1e82df059c72ef4ce87c3fe\nTue Jan 27 11:00:57 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96 (2fe55454a57ca8e01dce97f654e0a47b037abf96a1e82df059c72ef4ce87c3fe)\n2fe55454a57ca8e01dce97f654e0a47b037abf96a1e82df059c72ef4ce87c3fe\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:00:57 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:57.516 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[137a45f3-772d-4212-bd81-a0dd543c8861]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:00:57 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:57.517 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98f694e3-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:00:57 compute-0 kernel: tap98f694e3-b0: left promiscuous mode
Jan 27 23:00:57 compute-0 nova_compute[185650]: 2026-01-27 23:00:57.521 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:57 compute-0 nova_compute[185650]: 2026-01-27 23:00:57.539 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:00:57 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:57.541 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[e2c3fc01-b3eb-45ea-820e-d9cbc280b06c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:00:57 compute-0 podman[246809]: 2026-01-27 23:00:57.550958854 +0000 UTC m=+0.121406272 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, managed_by=edpm_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.openshift.expose-services=, io.openshift.tags=base rhel9, name=ubi9, vendor=Red Hat, Inc., release=1214.1726694543, io.buildah.version=1.29.0, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, 
com.redhat.component=ubi9-container, config_id=kepler, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 27 23:00:57 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:57.556 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[662852bf-2074-40e1-92b0-553676c3f047]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:00:57 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:57.557 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[1619f9cf-4ce7-4316-9c55-66b8045cf761]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:00:57 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:57.577 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[bc153a35-e6cf-47b9-bcf5-c668529ec7b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 364990, 'reachable_time': 25582, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246864, 'error': None, 'target': 'ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:00:57 compute-0 systemd[1]: run-netns-ovnmeta\x2d98f694e3\x2dbecc\x2d413f\x2db42b\x2d35a7171f7f96.mount: Deactivated successfully.
Jan 27 23:00:57 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:57.593 107797 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-98f694e3-becc-413f-b42b-35a7171f7f96 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 23:00:57 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:00:57.593 107797 DEBUG oslo.privsep.daemon [-] privsep: reply[7bd84722-d6c2-4f7c-b771-22a8b76da3e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:00:57 compute-0 podman[246822]: 2026-01-27 23:00:57.603629959 +0000 UTC m=+0.148281128 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 23:00:59 compute-0 nova_compute[185650]: 2026-01-27 23:00:59.305 185654 DEBUG nova.compute.manager [req-aaf269b4-a103-4a7b-a371-8a8a24698f75 req-ccb45c76-66eb-4c9f-a0df-25e6e61d923b b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Received event network-vif-plugged-389fa2e1-24bb-48bb-a577-b2f7ade8ddc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 23:00:59 compute-0 nova_compute[185650]: 2026-01-27 23:00:59.305 185654 DEBUG oslo_concurrency.lockutils [req-aaf269b4-a103-4a7b-a371-8a8a24698f75 req-ccb45c76-66eb-4c9f-a0df-25e6e61d923b b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "344c74c3-95d6-4f19-993f-b4a89c9d074b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:00:59 compute-0 nova_compute[185650]: 2026-01-27 23:00:59.306 185654 DEBUG oslo_concurrency.lockutils [req-aaf269b4-a103-4a7b-a371-8a8a24698f75 req-ccb45c76-66eb-4c9f-a0df-25e6e61d923b b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "344c74c3-95d6-4f19-993f-b4a89c9d074b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:00:59 compute-0 nova_compute[185650]: 2026-01-27 23:00:59.306 185654 DEBUG oslo_concurrency.lockutils [req-aaf269b4-a103-4a7b-a371-8a8a24698f75 req-ccb45c76-66eb-4c9f-a0df-25e6e61d923b b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "344c74c3-95d6-4f19-993f-b4a89c9d074b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:00:59 compute-0 nova_compute[185650]: 2026-01-27 23:00:59.306 185654 DEBUG nova.compute.manager [req-aaf269b4-a103-4a7b-a371-8a8a24698f75 req-ccb45c76-66eb-4c9f-a0df-25e6e61d923b b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] No waiting events found dispatching network-vif-plugged-389fa2e1-24bb-48bb-a577-b2f7ade8ddc5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 23:00:59 compute-0 nova_compute[185650]: 2026-01-27 23:00:59.307 185654 WARNING nova.compute.manager [req-aaf269b4-a103-4a7b-a371-8a8a24698f75 req-ccb45c76-66eb-4c9f-a0df-25e6e61d923b b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Received unexpected event network-vif-plugged-389fa2e1-24bb-48bb-a577-b2f7ade8ddc5 for instance with vm_state active and task_state deleting.
Jan 27 23:00:59 compute-0 podman[201529]: time="2026-01-27T23:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 23:00:59 compute-0 podman[201529]: @ - - [27/Jan/2026:23:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 27 23:00:59 compute-0 podman[201529]: @ - - [27/Jan/2026:23:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3911 "" "Go-http-client/1.1"
Jan 27 23:01:00 compute-0 nova_compute[185650]: 2026-01-27 23:01:00.410 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:01:00 compute-0 nova_compute[185650]: 2026-01-27 23:01:00.694 185654 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769554845.6921682, 5409358c-78dc-4761-841a-7f453c6209fb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 23:01:00 compute-0 nova_compute[185650]: 2026-01-27 23:01:00.694 185654 INFO nova.compute.manager [-] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] VM Stopped (Lifecycle Event)
Jan 27 23:01:00 compute-0 nova_compute[185650]: 2026-01-27 23:01:00.718 185654 DEBUG nova.compute.manager [None req-1bc6d84b-f4f0-4300-8f1d-d32010b7cca2 - - - - - -] [instance: 5409358c-78dc-4761-841a-7f453c6209fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 23:01:00 compute-0 nova_compute[185650]: 2026-01-27 23:01:00.719 185654 DEBUG nova.network.neutron [-] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 23:01:00 compute-0 nova_compute[185650]: 2026-01-27 23:01:00.735 185654 INFO nova.compute.manager [-] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Took 3.31 seconds to deallocate network for instance.
Jan 27 23:01:00 compute-0 nova_compute[185650]: 2026-01-27 23:01:00.777 185654 DEBUG oslo_concurrency.lockutils [None req-bd8944e1-34ea-4742-976d-ae02fcfb44a6 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:01:00 compute-0 nova_compute[185650]: 2026-01-27 23:01:00.778 185654 DEBUG oslo_concurrency.lockutils [None req-bd8944e1-34ea-4742-976d-ae02fcfb44a6 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:01:00 compute-0 nova_compute[185650]: 2026-01-27 23:01:00.834 185654 DEBUG nova.compute.provider_tree [None req-bd8944e1-34ea-4742-976d-ae02fcfb44a6 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 23:01:00 compute-0 nova_compute[185650]: 2026-01-27 23:01:00.853 185654 DEBUG nova.scheduler.client.report [None req-bd8944e1-34ea-4742-976d-ae02fcfb44a6 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 23:01:00 compute-0 nova_compute[185650]: 2026-01-27 23:01:00.873 185654 DEBUG oslo_concurrency.lockutils [None req-bd8944e1-34ea-4742-976d-ae02fcfb44a6 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:01:00 compute-0 nova_compute[185650]: 2026-01-27 23:01:00.896 185654 INFO nova.scheduler.client.report [None req-bd8944e1-34ea-4742-976d-ae02fcfb44a6 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Deleted allocations for instance 344c74c3-95d6-4f19-993f-b4a89c9d074b
Jan 27 23:01:00 compute-0 nova_compute[185650]: 2026-01-27 23:01:00.964 185654 DEBUG oslo_concurrency.lockutils [None req-bd8944e1-34ea-4742-976d-ae02fcfb44a6 7387204f74504e288ed7a5dee73f5083 8318d5a200d74e4386cf4972db015b75 - - default default] Lock "344c74c3-95d6-4f19-993f-b4a89c9d074b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.980s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:01:01 compute-0 CROND[246873]: (root) CMD (run-parts /etc/cron.hourly)
Jan 27 23:01:01 compute-0 run-parts[246876]: (/etc/cron.hourly) starting 0anacron
Jan 27 23:01:01 compute-0 run-parts[246882]: (/etc/cron.hourly) finished 0anacron
Jan 27 23:01:01 compute-0 CROND[246872]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 27 23:01:01 compute-0 nova_compute[185650]: 2026-01-27 23:01:01.383 185654 DEBUG nova.compute.manager [req-3072c343-4df6-4bcb-bc2b-dbd24d435659 req-1a1f441d-404e-4676-b680-6eb7180a8d34 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Received event network-vif-deleted-389fa2e1-24bb-48bb-a577-b2f7ade8ddc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 23:01:01 compute-0 openstack_network_exporter[204648]: ERROR   23:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 23:01:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:01:01 compute-0 openstack_network_exporter[204648]: ERROR   23:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 23:01:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:01:01 compute-0 nova_compute[185650]: 2026-01-27 23:01:01.992 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:01:01 compute-0 nova_compute[185650]: 2026-01-27 23:01:01.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:01:01 compute-0 nova_compute[185650]: 2026-01-27 23:01:01.993 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 23:01:01 compute-0 nova_compute[185650]: 2026-01-27 23:01:01.994 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:01:02 compute-0 nova_compute[185650]: 2026-01-27 23:01:02.016 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:01:02 compute-0 nova_compute[185650]: 2026-01-27 23:01:02.017 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:01:02 compute-0 nova_compute[185650]: 2026-01-27 23:01:02.017 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:01:02 compute-0 nova_compute[185650]: 2026-01-27 23:01:02.018 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 23:01:02 compute-0 nova_compute[185650]: 2026-01-27 23:01:02.330 185654 WARNING nova.virt.libvirt.driver [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 23:01:02 compute-0 nova_compute[185650]: 2026-01-27 23:01:02.332 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5378MB free_disk=72.41479110717773GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 23:01:02 compute-0 nova_compute[185650]: 2026-01-27 23:01:02.332 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:01:02 compute-0 nova_compute[185650]: 2026-01-27 23:01:02.333 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:01:02 compute-0 nova_compute[185650]: 2026-01-27 23:01:02.367 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:01:02 compute-0 nova_compute[185650]: 2026-01-27 23:01:02.386 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 23:01:02 compute-0 nova_compute[185650]: 2026-01-27 23:01:02.386 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 23:01:02 compute-0 nova_compute[185650]: 2026-01-27 23:01:02.523 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 23:01:02 compute-0 nova_compute[185650]: 2026-01-27 23:01:02.535 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 23:01:02 compute-0 nova_compute[185650]: 2026-01-27 23:01:02.566 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 23:01:02 compute-0 nova_compute[185650]: 2026-01-27 23:01:02.567 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:01:03 compute-0 nova_compute[185650]: 2026-01-27 23:01:03.568 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:01:03 compute-0 nova_compute[185650]: 2026-01-27 23:01:03.568 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 23:01:03 compute-0 nova_compute[185650]: 2026-01-27 23:01:03.586 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 23:01:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:01:04.154 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:01:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:01:04.154 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:01:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:01:04.154 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:01:04 compute-0 nova_compute[185650]: 2026-01-27 23:01:04.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:01:04 compute-0 nova_compute[185650]: 2026-01-27 23:01:04.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:01:05 compute-0 podman[246884]: 2026-01-27 23:01:05.377664935 +0000 UTC m=+0.077142312 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 23:01:05 compute-0 nova_compute[185650]: 2026-01-27 23:01:05.412 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:01:05 compute-0 nova_compute[185650]: 2026-01-27 23:01:05.992 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:01:05 compute-0 nova_compute[185650]: 2026-01-27 23:01:05.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:01:06 compute-0 nova_compute[185650]: 2026-01-27 23:01:06.989 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:01:07 compute-0 nova_compute[185650]: 2026-01-27 23:01:07.371 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:01:08 compute-0 podman[246907]: 2026-01-27 23:01:08.381177129 +0000 UTC m=+0.084285669 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, vcs-type=git, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.buildah.version=1.33.7, name=ubi9-minimal)
Jan 27 23:01:10 compute-0 nova_compute[185650]: 2026-01-27 23:01:10.415 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:01:11 compute-0 nova_compute[185650]: 2026-01-27 23:01:11.007 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:01:12 compute-0 nova_compute[185650]: 2026-01-27 23:01:12.288 185654 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769554857.2862105, 344c74c3-95d6-4f19-993f-b4a89c9d074b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 23:01:12 compute-0 nova_compute[185650]: 2026-01-27 23:01:12.288 185654 INFO nova.compute.manager [-] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] VM Stopped (Lifecycle Event)
Jan 27 23:01:12 compute-0 nova_compute[185650]: 2026-01-27 23:01:12.308 185654 DEBUG nova.compute.manager [None req-c65eb939-f574-4048-9b4c-431713850c4a - - - - - -] [instance: 344c74c3-95d6-4f19-993f-b4a89c9d074b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 23:01:12 compute-0 nova_compute[185650]: 2026-01-27 23:01:12.376 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:01:15 compute-0 nova_compute[185650]: 2026-01-27 23:01:15.416 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:01:17 compute-0 nova_compute[185650]: 2026-01-27 23:01:17.381 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:01:18 compute-0 podman[246928]: 2026-01-27 23:01:18.385604038 +0000 UTC m=+0.083017265 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 23:01:18 compute-0 podman[246929]: 2026-01-27 23:01:18.38798776 +0000 UTC m=+0.080863279 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 27 23:01:20 compute-0 nova_compute[185650]: 2026-01-27 23:01:20.420 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:01:22 compute-0 nova_compute[185650]: 2026-01-27 23:01:22.395 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:01:24 compute-0 podman[246962]: 2026-01-27 23:01:24.415272782 +0000 UTC m=+0.109689077 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 23:01:25 compute-0 nova_compute[185650]: 2026-01-27 23:01:25.423 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:01:26 compute-0 podman[246986]: 2026-01-27 23:01:26.38188385 +0000 UTC m=+0.083389723 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 23:01:27 compute-0 nova_compute[185650]: 2026-01-27 23:01:27.400 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:01:28 compute-0 podman[247007]: 2026-01-27 23:01:28.38928269 +0000 UTC m=+0.090096014 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, release-0.7.12=, io.buildah.version=1.29.0, managed_by=edpm_ansible, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, version=9.4, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Jan 27 23:01:28 compute-0 podman[247008]: 2026-01-27 23:01:28.422100945 +0000 UTC m=+0.115466059 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 23:01:29 compute-0 ovn_controller[98048]: 2026-01-27T23:01:29Z|00064|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Jan 27 23:01:29 compute-0 podman[201529]: time="2026-01-27T23:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 23:01:29 compute-0 podman[201529]: @ - - [27/Jan/2026:23:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 27 23:01:29 compute-0 podman[201529]: @ - - [27/Jan/2026:23:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3912 "" "Go-http-client/1.1"
Jan 27 23:01:30 compute-0 nova_compute[185650]: 2026-01-27 23:01:30.428 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:01:31 compute-0 openstack_network_exporter[204648]: ERROR   23:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 23:01:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:01:31 compute-0 openstack_network_exporter[204648]: ERROR   23:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 23:01:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:01:32 compute-0 nova_compute[185650]: 2026-01-27 23:01:32.403 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:01:34 compute-0 sshd-session[247050]: Received disconnect from 45.148.10.147 port 42194:11:  [preauth]
Jan 27 23:01:34 compute-0 sshd-session[247050]: Disconnected from authenticating user root 45.148.10.147 port 42194 [preauth]
Jan 27 23:01:35 compute-0 nova_compute[185650]: 2026-01-27 23:01:35.432 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:01:36 compute-0 podman[247052]: 2026-01-27 23:01:36.386862735 +0000 UTC m=+0.080293720 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 23:01:37 compute-0 nova_compute[185650]: 2026-01-27 23:01:37.407 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.110 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.110 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c646060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f826c6475f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645490>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.115 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.116 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f826c645dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.116 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.116 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f826c647800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.116 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.116 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f826c647650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.116 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.117 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6477a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b1e1e20>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.117 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f826c645640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.118 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f826c8ae7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.118 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f826c645a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.118 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f826c6462a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.118 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f826c647f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.118 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f826c645af0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f826c645d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f826c645b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f826c644a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f826c6453a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f826c6454c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f826c645520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f826c645d90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.120 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f826c646570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.120 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f826c645580>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.120 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f826c6455e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.120 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f826c644050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.120 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f826c647860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.121 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.121 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f826c6476e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.121 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.121 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f826c6456a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.121 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.121 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f826f277b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.121 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.121 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f826c647770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.121 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.122 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.122 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.122 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.122 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.122 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.122 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.122 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.122 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:01:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:01:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:01:39 compute-0 podman[247075]: 2026-01-27 23:01:39.373994121 +0000 UTC m=+0.071880802 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vcs-type=git, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, config_id=openstack_network_exporter, io.openshift.expose-services=, distribution-scope=public)
Jan 27 23:01:40 compute-0 nova_compute[185650]: 2026-01-27 23:01:40.434 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:01:42 compute-0 nova_compute[185650]: 2026-01-27 23:01:42.412 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:01:45 compute-0 nova_compute[185650]: 2026-01-27 23:01:45.435 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:01:47 compute-0 nova_compute[185650]: 2026-01-27 23:01:47.416 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:01:49 compute-0 podman[247093]: 2026-01-27 23:01:49.373138728 +0000 UTC m=+0.065851679 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Jan 27 23:01:49 compute-0 podman[247094]: 2026-01-27 23:01:49.40729052 +0000 UTC m=+0.096861726 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2)
Jan 27 23:01:50 compute-0 nova_compute[185650]: 2026-01-27 23:01:50.439 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:01:52 compute-0 nova_compute[185650]: 2026-01-27 23:01:52.421 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:01:55 compute-0 podman[247130]: 2026-01-27 23:01:55.37062654 +0000 UTC m=+0.065777347 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 23:01:55 compute-0 nova_compute[185650]: 2026-01-27 23:01:55.440 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:01:57 compute-0 podman[247155]: 2026-01-27 23:01:57.380661211 +0000 UTC m=+0.076202598 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 23:01:57 compute-0 nova_compute[185650]: 2026-01-27 23:01:57.426 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:01:59 compute-0 podman[247175]: 2026-01-27 23:01:59.391170185 +0000 UTC m=+0.082032176 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, distribution-scope=public, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, config_id=kepler, container_name=kepler, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, name=ubi9, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible)
Jan 27 23:01:59 compute-0 podman[247176]: 2026-01-27 23:01:59.413440176 +0000 UTC m=+0.103139675 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 27 23:01:59 compute-0 podman[201529]: time="2026-01-27T23:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 23:01:59 compute-0 podman[201529]: @ - - [27/Jan/2026:23:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 27 23:01:59 compute-0 podman[201529]: @ - - [27/Jan/2026:23:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3908 "" "Go-http-client/1.1"
Jan 27 23:02:00 compute-0 nova_compute[185650]: 2026-01-27 23:02:00.442 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:02:01 compute-0 openstack_network_exporter[204648]: ERROR   23:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 23:02:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:02:01 compute-0 openstack_network_exporter[204648]: ERROR   23:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 23:02:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:02:01 compute-0 nova_compute[185650]: 2026-01-27 23:02:01.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:02:02 compute-0 nova_compute[185650]: 2026-01-27 23:02:02.022 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:02:02 compute-0 nova_compute[185650]: 2026-01-27 23:02:02.022 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:02:02 compute-0 nova_compute[185650]: 2026-01-27 23:02:02.023 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:02:02 compute-0 nova_compute[185650]: 2026-01-27 23:02:02.023 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 23:02:02 compute-0 nova_compute[185650]: 2026-01-27 23:02:02.379 185654 WARNING nova.virt.libvirt.driver [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 23:02:02 compute-0 nova_compute[185650]: 2026-01-27 23:02:02.380 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5388MB free_disk=72.41479110717773GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 23:02:02 compute-0 nova_compute[185650]: 2026-01-27 23:02:02.381 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:02:02 compute-0 nova_compute[185650]: 2026-01-27 23:02:02.381 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:02:02 compute-0 nova_compute[185650]: 2026-01-27 23:02:02.429 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:02:02 compute-0 nova_compute[185650]: 2026-01-27 23:02:02.451 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 23:02:02 compute-0 nova_compute[185650]: 2026-01-27 23:02:02.452 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 23:02:02 compute-0 nova_compute[185650]: 2026-01-27 23:02:02.473 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 23:02:02 compute-0 nova_compute[185650]: 2026-01-27 23:02:02.490 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 23:02:02 compute-0 nova_compute[185650]: 2026-01-27 23:02:02.492 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 23:02:02 compute-0 nova_compute[185650]: 2026-01-27 23:02:02.493 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:02:03 compute-0 nova_compute[185650]: 2026-01-27 23:02:03.493 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:02:03 compute-0 nova_compute[185650]: 2026-01-27 23:02:03.494 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 23:02:03 compute-0 nova_compute[185650]: 2026-01-27 23:02:03.494 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 23:02:03 compute-0 nova_compute[185650]: 2026-01-27 23:02:03.508 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 23:02:03 compute-0 nova_compute[185650]: 2026-01-27 23:02:03.509 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:02:03 compute-0 nova_compute[185650]: 2026-01-27 23:02:03.509 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 23:02:03 compute-0 nova_compute[185650]: 2026-01-27 23:02:03.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:02:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:02:04.155 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:02:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:02:04.155 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:02:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:02:04.156 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:02:04 compute-0 nova_compute[185650]: 2026-01-27 23:02:04.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:02:05 compute-0 nova_compute[185650]: 2026-01-27 23:02:05.444 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:02:05 compute-0 nova_compute[185650]: 2026-01-27 23:02:05.992 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:02:05 compute-0 nova_compute[185650]: 2026-01-27 23:02:05.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:02:06 compute-0 nova_compute[185650]: 2026-01-27 23:02:06.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:02:07 compute-0 podman[247218]: 2026-01-27 23:02:07.361222045 +0000 UTC m=+0.064536614 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 23:02:07 compute-0 nova_compute[185650]: 2026-01-27 23:02:07.434 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:02:10 compute-0 podman[247242]: 2026-01-27 23:02:10.434899361 +0000 UTC m=+0.124710900 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, release=1755695350, build-date=2025-08-20T13:12:41, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 27 23:02:10 compute-0 nova_compute[185650]: 2026-01-27 23:02:10.447 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:02:12 compute-0 nova_compute[185650]: 2026-01-27 23:02:12.439 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:02:12 compute-0 nova_compute[185650]: 2026-01-27 23:02:12.988 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:02:15 compute-0 nova_compute[185650]: 2026-01-27 23:02:15.451 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:02:17 compute-0 nova_compute[185650]: 2026-01-27 23:02:17.444 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:02:20 compute-0 podman[247263]: 2026-01-27 23:02:20.422386852 +0000 UTC m=+0.105843069 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 27 23:02:20 compute-0 nova_compute[185650]: 2026-01-27 23:02:20.454 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:02:20 compute-0 podman[247264]: 2026-01-27 23:02:20.477913592 +0000 UTC m=+0.154641808 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260126, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Jan 27 23:02:22 compute-0 nova_compute[185650]: 2026-01-27 23:02:22.450 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:02:25 compute-0 nova_compute[185650]: 2026-01-27 23:02:25.457 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:02:25 compute-0 podman[247301]: 2026-01-27 23:02:25.601376041 +0000 UTC m=+0.094915164 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 23:02:27 compute-0 nova_compute[185650]: 2026-01-27 23:02:27.457 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:02:28 compute-0 podman[247326]: 2026-01-27 23:02:28.446152443 +0000 UTC m=+0.135247673 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 23:02:29 compute-0 podman[201529]: time="2026-01-27T23:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 23:02:29 compute-0 podman[201529]: @ - - [27/Jan/2026:23:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 27 23:02:29 compute-0 podman[201529]: @ - - [27/Jan/2026:23:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3913 "" "Go-http-client/1.1"
Jan 27 23:02:30 compute-0 podman[247347]: 2026-01-27 23:02:30.442438784 +0000 UTC m=+0.135271115 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 27 23:02:30 compute-0 podman[247346]: 2026-01-27 23:02:30.444504739 +0000 UTC m=+0.134001599 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, name=ubi9, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-container)
Jan 27 23:02:30 compute-0 nova_compute[185650]: 2026-01-27 23:02:30.458 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:02:31 compute-0 openstack_network_exporter[204648]: ERROR   23:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 23:02:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:02:31 compute-0 openstack_network_exporter[204648]: ERROR   23:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 23:02:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:02:32 compute-0 nova_compute[185650]: 2026-01-27 23:02:32.462 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:02:35 compute-0 nova_compute[185650]: 2026-01-27 23:02:35.460 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:02:37 compute-0 nova_compute[185650]: 2026-01-27 23:02:37.467 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:02:38 compute-0 podman[247387]: 2026-01-27 23:02:38.378578764 +0000 UTC m=+0.081178170 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 23:02:40 compute-0 nova_compute[185650]: 2026-01-27 23:02:40.464 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:02:41 compute-0 podman[247410]: 2026-01-27 23:02:41.424730738 +0000 UTC m=+0.119635007 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, managed_by=edpm_ansible, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container)
Jan 27 23:02:42 compute-0 nova_compute[185650]: 2026-01-27 23:02:42.472 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:02:45 compute-0 nova_compute[185650]: 2026-01-27 23:02:45.467 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:02:47 compute-0 nova_compute[185650]: 2026-01-27 23:02:47.478 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:02:50 compute-0 nova_compute[185650]: 2026-01-27 23:02:50.469 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:02:51 compute-0 podman[247432]: 2026-01-27 23:02:51.408950719 +0000 UTC m=+0.099188336 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 27 23:02:51 compute-0 podman[247433]: 2026-01-27 23:02:51.41268367 +0000 UTC m=+0.102049472 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, io.buildah.version=1.41.4)
Jan 27 23:02:52 compute-0 nova_compute[185650]: 2026-01-27 23:02:52.483 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:02:55 compute-0 nova_compute[185650]: 2026-01-27 23:02:55.471 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:02:56 compute-0 podman[247469]: 2026-01-27 23:02:56.362367265 +0000 UTC m=+0.061784672 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 23:02:57 compute-0 nova_compute[185650]: 2026-01-27 23:02:57.486 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:02:59 compute-0 podman[247493]: 2026-01-27 23:02:59.397614535 +0000 UTC m=+0.085497799 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Jan 27 23:02:59 compute-0 podman[201529]: time="2026-01-27T23:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 23:02:59 compute-0 podman[201529]: @ - - [27/Jan/2026:23:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 27 23:02:59 compute-0 podman[201529]: @ - - [27/Jan/2026:23:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3906 "" "Go-http-client/1.1"
Jan 27 23:03:00 compute-0 nova_compute[185650]: 2026-01-27 23:03:00.474 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:03:01 compute-0 podman[247513]: 2026-01-27 23:03:01.41194762 +0000 UTC m=+0.112588046 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 27 23:03:01 compute-0 openstack_network_exporter[204648]: ERROR   23:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 23:03:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:03:01 compute-0 openstack_network_exporter[204648]: ERROR   23:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 23:03:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:03:01 compute-0 podman[247512]: 2026-01-27 23:03:01.430884153 +0000 UTC m=+0.135516349 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, container_name=kepler, managed_by=edpm_ansible, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., io.openshift.expose-services=, name=ubi9, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, com.redhat.component=ubi9-container, release=1214.1726694543, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Jan 27 23:03:02 compute-0 nova_compute[185650]: 2026-01-27 23:03:02.490 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:03:02 compute-0 nova_compute[185650]: 2026-01-27 23:03:02.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:03:03 compute-0 nova_compute[185650]: 2026-01-27 23:03:03.021 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:03:03 compute-0 nova_compute[185650]: 2026-01-27 23:03:03.021 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:03:03 compute-0 nova_compute[185650]: 2026-01-27 23:03:03.022 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:03:03 compute-0 nova_compute[185650]: 2026-01-27 23:03:03.022 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 23:03:03 compute-0 nova_compute[185650]: 2026-01-27 23:03:03.360 185654 WARNING nova.virt.libvirt.driver [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 23:03:03 compute-0 nova_compute[185650]: 2026-01-27 23:03:03.361 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5381MB free_disk=72.41479110717773GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 23:03:03 compute-0 nova_compute[185650]: 2026-01-27 23:03:03.361 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:03:03 compute-0 nova_compute[185650]: 2026-01-27 23:03:03.361 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:03:03 compute-0 nova_compute[185650]: 2026-01-27 23:03:03.421 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 23:03:03 compute-0 nova_compute[185650]: 2026-01-27 23:03:03.421 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 23:03:03 compute-0 nova_compute[185650]: 2026-01-27 23:03:03.449 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 23:03:03 compute-0 nova_compute[185650]: 2026-01-27 23:03:03.465 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 23:03:03 compute-0 nova_compute[185650]: 2026-01-27 23:03:03.466 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 23:03:03 compute-0 nova_compute[185650]: 2026-01-27 23:03:03.467 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:03:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:03:04.157 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:03:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:03:04.157 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:03:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:03:04.157 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:03:04 compute-0 nova_compute[185650]: 2026-01-27 23:03:04.467 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:03:04 compute-0 nova_compute[185650]: 2026-01-27 23:03:04.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:03:04 compute-0 nova_compute[185650]: 2026-01-27 23:03:04.994 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 23:03:04 compute-0 nova_compute[185650]: 2026-01-27 23:03:04.994 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 23:03:05 compute-0 nova_compute[185650]: 2026-01-27 23:03:05.006 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 23:03:05 compute-0 nova_compute[185650]: 2026-01-27 23:03:05.007 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:03:05 compute-0 nova_compute[185650]: 2026-01-27 23:03:05.007 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:03:05 compute-0 nova_compute[185650]: 2026-01-27 23:03:05.007 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 23:03:05 compute-0 nova_compute[185650]: 2026-01-27 23:03:05.480 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:03:07 compute-0 nova_compute[185650]: 2026-01-27 23:03:07.492 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:03:07 compute-0 nova_compute[185650]: 2026-01-27 23:03:07.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:03:07 compute-0 nova_compute[185650]: 2026-01-27 23:03:07.994 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:03:07 compute-0 nova_compute[185650]: 2026-01-27 23:03:07.994 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:03:09 compute-0 podman[247556]: 2026-01-27 23:03:09.397503972 +0000 UTC m=+0.095208888 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 23:03:09 compute-0 nova_compute[185650]: 2026-01-27 23:03:09.990 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:03:10 compute-0 nova_compute[185650]: 2026-01-27 23:03:10.485 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:03:12 compute-0 podman[247579]: 2026-01-27 23:03:12.369477725 +0000 UTC m=+0.069508673 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, 
io.openshift.expose-services=, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.6, name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container)
Jan 27 23:03:12 compute-0 nova_compute[185650]: 2026-01-27 23:03:12.496 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:03:14 compute-0 nova_compute[185650]: 2026-01-27 23:03:14.006 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:03:15 compute-0 nova_compute[185650]: 2026-01-27 23:03:15.488 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:03:17 compute-0 nova_compute[185650]: 2026-01-27 23:03:17.500 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:03:20 compute-0 nova_compute[185650]: 2026-01-27 23:03:20.490 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:03:22 compute-0 podman[247601]: 2026-01-27 23:03:22.395117841 +0000 UTC m=+0.093928401 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 27 23:03:22 compute-0 podman[247602]: 2026-01-27 23:03:22.436740481 +0000 UTC m=+0.123376907 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true)
Jan 27 23:03:22 compute-0 nova_compute[185650]: 2026-01-27 23:03:22.504 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:03:25 compute-0 nova_compute[185650]: 2026-01-27 23:03:25.493 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:03:27 compute-0 podman[247639]: 2026-01-27 23:03:27.392154757 +0000 UTC m=+0.093221420 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 23:03:27 compute-0 nova_compute[185650]: 2026-01-27 23:03:27.509 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:03:29 compute-0 podman[201529]: time="2026-01-27T23:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 23:03:29 compute-0 podman[201529]: @ - - [27/Jan/2026:23:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 27 23:03:29 compute-0 podman[201529]: @ - - [27/Jan/2026:23:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3904 "" "Go-http-client/1.1"
Jan 27 23:03:30 compute-0 podman[247664]: 2026-01-27 23:03:30.378098088 +0000 UTC m=+0.075966456 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 23:03:30 compute-0 nova_compute[185650]: 2026-01-27 23:03:30.498 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:03:31 compute-0 openstack_network_exporter[204648]: ERROR   23:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 23:03:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:03:31 compute-0 openstack_network_exporter[204648]: ERROR   23:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 23:03:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:03:32 compute-0 podman[247684]: 2026-01-27 23:03:32.379528008 +0000 UTC m=+0.080531301 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., distribution-scope=public, release-0.7.12=, release=1214.1726694543, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, name=ubi9, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Jan 27 23:03:32 compute-0 podman[247685]: 2026-01-27 23:03:32.439227617 +0000 UTC m=+0.133631733 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, 
org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Jan 27 23:03:32 compute-0 nova_compute[185650]: 2026-01-27 23:03:32.512 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:03:35 compute-0 nova_compute[185650]: 2026-01-27 23:03:35.501 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:03:37 compute-0 nova_compute[185650]: 2026-01-27 23:03:37.516 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.110 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.110 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c646060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826c627440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f826c6475f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826c627440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826c627440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826c627440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826c627440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826c627440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826c627440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826c627440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.113 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.114 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f826c645dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.114 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.114 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f826c647800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.114 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826c627440>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.114 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f826c647650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.115 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826c627440>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f826c645640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.116 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.116 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f826c8ae7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.116 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826c627440>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.116 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f826c645a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.117 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.116 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826c627440>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.117 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f826c6462a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.117 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f826c647f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.118 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f826c645af0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.118 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.117 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826c627440>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': [], 'cpu': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f826c645d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f826c645b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.118 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645490>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826c627440>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': [], 'cpu': [], 'disk.ephemeral.size': [], 'memory.usage': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f826c644a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f826c6453a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.120 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.119 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826c627440>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': [], 'cpu': [], 'disk.ephemeral.size': [], 'memory.usage': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.120 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826c627440>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': [], 'cpu': [], 'disk.ephemeral.size': [], 'memory.usage': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.120 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826c627440>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': [], 'cpu': [], 'disk.ephemeral.size': [], 'memory.usage': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.120 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826c627440>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': [], 'cpu': [], 'disk.ephemeral.size': [], 'memory.usage': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.121 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826c627440>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': [], 'cpu': [], 'disk.ephemeral.size': [], 'memory.usage': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.121 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826c627440>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': [], 'cpu': [], 'disk.ephemeral.size': [], 'memory.usage': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.121 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826c627440>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': [], 'cpu': [], 'disk.ephemeral.size': [], 'memory.usage': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.121 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826c627440>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': [], 'cpu': [], 'disk.ephemeral.size': [], 'memory.usage': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.121 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826c627440>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': [], 'cpu': [], 'disk.ephemeral.size': [], 'memory.usage': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.122 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826c627440>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': [], 'cpu': [], 'disk.ephemeral.size': [], 'memory.usage': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.121 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f826c6454c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.122 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.122 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826c627440>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': [], 'cpu': [], 'disk.ephemeral.size': [], 'memory.usage': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.122 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6477a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826c627440>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.error': [], 'network.incoming.packets.drop': [], 'disk.device.write.latency': [], 'network.incoming.bytes': [], 'disk.device.write.requests': [], 'network.outgoing.packets.error': [], 'cpu': [], 'disk.ephemeral.size': [], 'memory.usage': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.122 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f826c645520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.123 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.123 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f826c645d90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.123 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.123 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f826c646570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.123 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.123 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f826c645580>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.123 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.123 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f826c6455e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.123 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f826c644050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.124 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f826c647860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.124 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f826c6476e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.124 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f826c6456a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.124 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f826f277b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.124 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f826c647770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.124 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.126 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.126 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.126 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.126 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.126 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.126 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.126 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.126 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.126 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.126 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.126 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.127 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.127 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.127 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.127 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.127 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.127 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.127 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:03:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:03:38.127 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:03:40 compute-0 podman[247732]: 2026-01-27 23:03:40.399803275 +0000 UTC m=+0.095660479 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 23:03:40 compute-0 nova_compute[185650]: 2026-01-27 23:03:40.504 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:03:42 compute-0 nova_compute[185650]: 2026-01-27 23:03:42.520 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:03:43 compute-0 podman[247755]: 2026-01-27 23:03:43.414628853 +0000 UTC m=+0.105620223 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, vcs-type=git, version=9.6, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 27 23:03:45 compute-0 nova_compute[185650]: 2026-01-27 23:03:45.508 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:03:47 compute-0 nova_compute[185650]: 2026-01-27 23:03:47.524 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:03:50 compute-0 nova_compute[185650]: 2026-01-27 23:03:50.511 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:03:52 compute-0 nova_compute[185650]: 2026-01-27 23:03:52.528 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:03:53 compute-0 podman[247777]: 2026-01-27 23:03:53.410587528 +0000 UTC m=+0.098559576 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Jan 27 23:03:53 compute-0 podman[247776]: 2026-01-27 23:03:53.432377056 +0000 UTC m=+0.118845743 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 27 23:03:55 compute-0 nova_compute[185650]: 2026-01-27 23:03:55.514 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:03:57 compute-0 nova_compute[185650]: 2026-01-27 23:03:57.532 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:03:58 compute-0 podman[247812]: 2026-01-27 23:03:58.369536246 +0000 UTC m=+0.070236624 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 23:03:59 compute-0 podman[201529]: time="2026-01-27T23:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 23:03:59 compute-0 podman[201529]: @ - - [27/Jan/2026:23:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 27 23:03:59 compute-0 podman[201529]: @ - - [27/Jan/2026:23:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3914 "" "Go-http-client/1.1"
Jan 27 23:04:00 compute-0 nova_compute[185650]: 2026-01-27 23:04:00.516 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:04:01 compute-0 podman[247836]: 2026-01-27 23:04:01.365635295 +0000 UTC m=+0.065151819 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Jan 27 23:04:01 compute-0 openstack_network_exporter[204648]: ERROR   23:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 23:04:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:04:01 compute-0 openstack_network_exporter[204648]: ERROR   23:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 23:04:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:04:02 compute-0 nova_compute[185650]: 2026-01-27 23:04:02.535 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:04:02 compute-0 nova_compute[185650]: 2026-01-27 23:04:02.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:04:03 compute-0 nova_compute[185650]: 2026-01-27 23:04:03.030 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:04:03 compute-0 nova_compute[185650]: 2026-01-27 23:04:03.031 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:04:03 compute-0 nova_compute[185650]: 2026-01-27 23:04:03.031 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:04:03 compute-0 nova_compute[185650]: 2026-01-27 23:04:03.031 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 23:04:03 compute-0 podman[247855]: 2026-01-27 23:04:03.379178627 +0000 UTC m=+0.086547957 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9, io.buildah.version=1.29.0, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.openshift.expose-services=, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, io.openshift.tags=base rhel9, release=1214.1726694543, com.redhat.component=ubi9-container, container_name=kepler)
Jan 27 23:04:03 compute-0 nova_compute[185650]: 2026-01-27 23:04:03.387 185654 WARNING nova.virt.libvirt.driver [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 23:04:03 compute-0 nova_compute[185650]: 2026-01-27 23:04:03.389 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5379MB free_disk=72.41479110717773GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 23:04:03 compute-0 nova_compute[185650]: 2026-01-27 23:04:03.389 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:04:03 compute-0 nova_compute[185650]: 2026-01-27 23:04:03.389 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:04:03 compute-0 podman[247856]: 2026-01-27 23:04:03.418532712 +0000 UTC m=+0.113508214 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, 
tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 23:04:03 compute-0 nova_compute[185650]: 2026-01-27 23:04:03.445 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 23:04:03 compute-0 nova_compute[185650]: 2026-01-27 23:04:03.445 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 23:04:03 compute-0 nova_compute[185650]: 2026-01-27 23:04:03.470 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 23:04:03 compute-0 nova_compute[185650]: 2026-01-27 23:04:03.511 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 23:04:03 compute-0 nova_compute[185650]: 2026-01-27 23:04:03.514 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 23:04:03 compute-0 nova_compute[185650]: 2026-01-27 23:04:03.515 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:04:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:04:04.158 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:04:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:04:04.159 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:04:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:04:04.159 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:04:05 compute-0 nova_compute[185650]: 2026-01-27 23:04:05.516 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:04:05 compute-0 nova_compute[185650]: 2026-01-27 23:04:05.516 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 23:04:05 compute-0 nova_compute[185650]: 2026-01-27 23:04:05.516 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 23:04:05 compute-0 nova_compute[185650]: 2026-01-27 23:04:05.519 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:04:05 compute-0 nova_compute[185650]: 2026-01-27 23:04:05.545 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 23:04:05 compute-0 nova_compute[185650]: 2026-01-27 23:04:05.545 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:04:05 compute-0 nova_compute[185650]: 2026-01-27 23:04:05.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:04:05 compute-0 nova_compute[185650]: 2026-01-27 23:04:05.994 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 23:04:06 compute-0 nova_compute[185650]: 2026-01-27 23:04:06.994 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:04:07 compute-0 nova_compute[185650]: 2026-01-27 23:04:07.539 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:04:07 compute-0 nova_compute[185650]: 2026-01-27 23:04:07.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:04:08 compute-0 nova_compute[185650]: 2026-01-27 23:04:08.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:04:09 compute-0 nova_compute[185650]: 2026-01-27 23:04:09.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:04:10 compute-0 nova_compute[185650]: 2026-01-27 23:04:10.522 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:04:11 compute-0 podman[247899]: 2026-01-27 23:04:11.389369548 +0000 UTC m=+0.094138788 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 27 23:04:12 compute-0 nova_compute[185650]: 2026-01-27 23:04:12.544 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:04:14 compute-0 podman[247922]: 2026-01-27 23:04:14.420476777 +0000 UTC m=+0.105170911 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, vcs-type=git, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter)
Jan 27 23:04:15 compute-0 nova_compute[185650]: 2026-01-27 23:04:15.523 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:04:15 compute-0 rsyslogd[235951]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 23:04:15 compute-0 rsyslogd[235951]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 27 23:04:15 compute-0 nova_compute[185650]: 2026-01-27 23:04:15.988 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:04:17 compute-0 nova_compute[185650]: 2026-01-27 23:04:17.548 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:04:20 compute-0 nova_compute[185650]: 2026-01-27 23:04:20.525 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:04:22 compute-0 nova_compute[185650]: 2026-01-27 23:04:22.554 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:04:24 compute-0 podman[247946]: 2026-01-27 23:04:24.422814473 +0000 UTC m=+0.107888874 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126)
Jan 27 23:04:24 compute-0 podman[247945]: 2026-01-27 23:04:24.43439089 +0000 UTC m=+0.119864071 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 27 23:04:25 compute-0 nova_compute[185650]: 2026-01-27 23:04:25.529 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:04:27 compute-0 nova_compute[185650]: 2026-01-27 23:04:27.559 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:04:29 compute-0 podman[247982]: 2026-01-27 23:04:29.372155736 +0000 UTC m=+0.070965555 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 27 23:04:29 compute-0 podman[201529]: time="2026-01-27T23:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 23:04:29 compute-0 podman[201529]: @ - - [27/Jan/2026:23:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 27 23:04:29 compute-0 podman[201529]: @ - - [27/Jan/2026:23:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3918 "" "Go-http-client/1.1"
Jan 27 23:04:30 compute-0 nova_compute[185650]: 2026-01-27 23:04:30.530 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:04:31 compute-0 openstack_network_exporter[204648]: ERROR   23:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 23:04:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:04:31 compute-0 openstack_network_exporter[204648]: ERROR   23:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 23:04:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:04:32 compute-0 podman[248006]: 2026-01-27 23:04:32.392029724 +0000 UTC m=+0.091287442 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi)
Jan 27 23:04:32 compute-0 nova_compute[185650]: 2026-01-27 23:04:32.563 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:04:34 compute-0 podman[248026]: 2026-01-27 23:04:34.397408919 +0000 UTC m=+0.098487323 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, build-date=2024-09-18T21:23:30, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.openshift.tags=base rhel9, architecture=x86_64, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.openshift.expose-services=)
Jan 27 23:04:34 compute-0 podman[248027]: 2026-01-27 23:04:34.424280533 +0000 UTC m=+0.120670833 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 27 23:04:35 compute-0 nova_compute[185650]: 2026-01-27 23:04:35.532 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:04:37 compute-0 nova_compute[185650]: 2026-01-27 23:04:37.568 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:04:40 compute-0 nova_compute[185650]: 2026-01-27 23:04:40.535 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:04:41 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:04:41.104 107302 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:41:f8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '26:ae:8e:b8:80:28'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 23:04:41 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:04:41.104 107302 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 23:04:41 compute-0 nova_compute[185650]: 2026-01-27 23:04:41.105 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:04:42 compute-0 podman[248068]: 2026-01-27 23:04:42.400476788 +0000 UTC m=+0.091952414 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 23:04:42 compute-0 nova_compute[185650]: 2026-01-27 23:04:42.572 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:04:44 compute-0 podman[248090]: 2026-01-27 23:04:44.76609565 +0000 UTC m=+0.098971903 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, config_id=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter)
Jan 27 23:04:45 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:04:45.107 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e88f80e1-ee63-4bdc-95c3-ad473efb7428, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:04:45 compute-0 nova_compute[185650]: 2026-01-27 23:04:45.538 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:04:47 compute-0 nova_compute[185650]: 2026-01-27 23:04:47.577 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:04:50 compute-0 nova_compute[185650]: 2026-01-27 23:04:50.542 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:04:52 compute-0 nova_compute[185650]: 2026-01-27 23:04:52.581 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:04:55 compute-0 podman[248111]: 2026-01-27 23:04:55.409194902 +0000 UTC m=+0.100871663 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 23:04:55 compute-0 podman[248112]: 2026-01-27 23:04:55.412445739 +0000 UTC m=+0.101052098 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 23:04:55 compute-0 nova_compute[185650]: 2026-01-27 23:04:55.544 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:04:55 compute-0 nova_compute[185650]: 2026-01-27 23:04:55.579 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:04:57 compute-0 nova_compute[185650]: 2026-01-27 23:04:57.586 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:04:59 compute-0 podman[201529]: time="2026-01-27T23:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 23:04:59 compute-0 podman[201529]: @ - - [27/Jan/2026:23:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 27 23:04:59 compute-0 podman[201529]: @ - - [27/Jan/2026:23:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3919 "" "Go-http-client/1.1"
Jan 27 23:05:00 compute-0 podman[248150]: 2026-01-27 23:05:00.431327104 +0000 UTC m=+0.123749206 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 27 23:05:00 compute-0 nova_compute[185650]: 2026-01-27 23:05:00.546 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:01 compute-0 openstack_network_exporter[204648]: ERROR   23:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 23:05:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:05:01 compute-0 openstack_network_exporter[204648]: ERROR   23:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 23:05:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:05:02 compute-0 nova_compute[185650]: 2026-01-27 23:05:02.590 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:03 compute-0 nova_compute[185650]: 2026-01-27 23:05:02.999 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:05:03 compute-0 nova_compute[185650]: 2026-01-27 23:05:03.027 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:03 compute-0 nova_compute[185650]: 2026-01-27 23:05:03.028 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:03 compute-0 nova_compute[185650]: 2026-01-27 23:05:03.028 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:03 compute-0 nova_compute[185650]: 2026-01-27 23:05:03.028 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 23:05:03 compute-0 nova_compute[185650]: 2026-01-27 23:05:03.337 185654 WARNING nova.virt.libvirt.driver [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 23:05:03 compute-0 nova_compute[185650]: 2026-01-27 23:05:03.338 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5371MB free_disk=72.41344833374023GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 23:05:03 compute-0 nova_compute[185650]: 2026-01-27 23:05:03.339 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:03 compute-0 nova_compute[185650]: 2026-01-27 23:05:03.339 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:03 compute-0 podman[248173]: 2026-01-27 23:05:03.413268075 +0000 UTC m=+0.110816660 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 27 23:05:03 compute-0 nova_compute[185650]: 2026-01-27 23:05:03.697 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 23:05:03 compute-0 nova_compute[185650]: 2026-01-27 23:05:03.697 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 23:05:04 compute-0 nova_compute[185650]: 2026-01-27 23:05:04.137 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Refreshing inventories for resource provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 27 23:05:04 compute-0 nova_compute[185650]: 2026-01-27 23:05:04.160 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Updating ProviderTree inventory for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 27 23:05:04 compute-0 nova_compute[185650]: 2026-01-27 23:05:04.160 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Updating inventory in ProviderTree for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 27 23:05:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:04.160 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:04.160 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:04.161 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:04 compute-0 nova_compute[185650]: 2026-01-27 23:05:04.178 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Refreshing aggregate associations for resource provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 27 23:05:04 compute-0 nova_compute[185650]: 2026-01-27 23:05:04.205 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Refreshing trait associations for resource provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NODE,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_AVX,HW_CPU_X86_MMX,HW_CPU_X86_BMI,HW_CPU_X86_SSE41,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 27 23:05:04 compute-0 nova_compute[185650]: 2026-01-27 23:05:04.243 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 23:05:04 compute-0 nova_compute[185650]: 2026-01-27 23:05:04.270 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 23:05:04 compute-0 nova_compute[185650]: 2026-01-27 23:05:04.272 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 23:05:04 compute-0 nova_compute[185650]: 2026-01-27 23:05:04.272 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.934s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:05 compute-0 podman[248193]: 2026-01-27 23:05:05.38900653 +0000 UTC m=+0.084052052 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=kepler, container_name=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Jan 27 23:05:05 compute-0 podman[248194]: 2026-01-27 23:05:05.430257856 +0000 UTC m=+0.129998853 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 27 23:05:05 compute-0 nova_compute[185650]: 2026-01-27 23:05:05.550 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:05 compute-0 nova_compute[185650]: 2026-01-27 23:05:05.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:05:05 compute-0 nova_compute[185650]: 2026-01-27 23:05:05.993 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 23:05:05 compute-0 nova_compute[185650]: 2026-01-27 23:05:05.993 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 23:05:06 compute-0 nova_compute[185650]: 2026-01-27 23:05:06.005 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 27 23:05:06 compute-0 nova_compute[185650]: 2026-01-27 23:05:06.005 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:05:07 compute-0 nova_compute[185650]: 2026-01-27 23:05:07.000 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:05:07 compute-0 nova_compute[185650]: 2026-01-27 23:05:07.001 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:05:07 compute-0 nova_compute[185650]: 2026-01-27 23:05:07.001 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 23:05:07 compute-0 nova_compute[185650]: 2026-01-27 23:05:07.592 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:07 compute-0 nova_compute[185650]: 2026-01-27 23:05:07.994 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:05:08 compute-0 nova_compute[185650]: 2026-01-27 23:05:08.992 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:05:09 compute-0 nova_compute[185650]: 2026-01-27 23:05:09.992 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:05:09 compute-0 nova_compute[185650]: 2026-01-27 23:05:09.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:05:10 compute-0 nova_compute[185650]: 2026-01-27 23:05:10.554 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:11 compute-0 ovn_controller[98048]: 2026-01-27T23:05:11Z|00065|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Jan 27 23:05:12 compute-0 nova_compute[185650]: 2026-01-27 23:05:12.595 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:12 compute-0 nova_compute[185650]: 2026-01-27 23:05:12.988 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:05:13 compute-0 podman[248238]: 2026-01-27 23:05:13.395321114 +0000 UTC m=+0.097723979 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 23:05:15 compute-0 podman[248262]: 2026-01-27 23:05:15.385641732 +0000 UTC m=+0.093340593 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, distribution-scope=public, config_id=openstack_network_exporter, vcs-type=git, io.openshift.expose-services=, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter)
Jan 27 23:05:15 compute-0 nova_compute[185650]: 2026-01-27 23:05:15.555 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:16 compute-0 nova_compute[185650]: 2026-01-27 23:05:16.008 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:05:16 compute-0 nova_compute[185650]: 2026-01-27 23:05:16.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:05:16 compute-0 nova_compute[185650]: 2026-01-27 23:05:16.995 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 27 23:05:17 compute-0 nova_compute[185650]: 2026-01-27 23:05:17.557 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:05:17 compute-0 nova_compute[185650]: 2026-01-27 23:05:17.599 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:18 compute-0 nova_compute[185650]: 2026-01-27 23:05:18.994 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:05:18 compute-0 nova_compute[185650]: 2026-01-27 23:05:18.994 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 27 23:05:18 compute-0 nova_compute[185650]: 2026-01-27 23:05:18.995 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:19 compute-0 nova_compute[185650]: 2026-01-27 23:05:19.017 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 27 23:05:20 compute-0 nova_compute[185650]: 2026-01-27 23:05:20.488 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:20 compute-0 nova_compute[185650]: 2026-01-27 23:05:20.534 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:20 compute-0 nova_compute[185650]: 2026-01-27 23:05:20.557 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:21 compute-0 nova_compute[185650]: 2026-01-27 23:05:21.266 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:22 compute-0 nova_compute[185650]: 2026-01-27 23:05:22.603 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:25 compute-0 nova_compute[185650]: 2026-01-27 23:05:25.560 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:25 compute-0 podman[248284]: 2026-01-27 23:05:25.662913549 +0000 UTC m=+0.064250012 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 27 23:05:25 compute-0 podman[248283]: 2026-01-27 23:05:25.667484602 +0000 UTC m=+0.083650512 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true)
Jan 27 23:05:26 compute-0 nova_compute[185650]: 2026-01-27 23:05:26.925 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:27 compute-0 nova_compute[185650]: 2026-01-27 23:05:27.607 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:28 compute-0 nova_compute[185650]: 2026-01-27 23:05:28.137 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:29 compute-0 podman[201529]: time="2026-01-27T23:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 23:05:29 compute-0 podman[201529]: @ - - [27/Jan/2026:23:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 27 23:05:29 compute-0 podman[201529]: @ - - [27/Jan/2026:23:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3903 "" "Go-http-client/1.1"
Jan 27 23:05:30 compute-0 nova_compute[185650]: 2026-01-27 23:05:30.062 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:30 compute-0 nova_compute[185650]: 2026-01-27 23:05:30.095 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:30 compute-0 nova_compute[185650]: 2026-01-27 23:05:30.562 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:30 compute-0 nova_compute[185650]: 2026-01-27 23:05:30.805 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:31 compute-0 podman[248321]: 2026-01-27 23:05:31.378027798 +0000 UTC m=+0.084046932 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 23:05:31 compute-0 openstack_network_exporter[204648]: ERROR   23:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 23:05:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:05:31 compute-0 openstack_network_exporter[204648]: ERROR   23:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 23:05:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:05:32 compute-0 nova_compute[185650]: 2026-01-27 23:05:32.611 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:34 compute-0 podman[248345]: 2026-01-27 23:05:34.393194978 +0000 UTC m=+0.085624175 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 27 23:05:35 compute-0 nova_compute[185650]: 2026-01-27 23:05:35.205 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:35 compute-0 nova_compute[185650]: 2026-01-27 23:05:35.565 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:36 compute-0 podman[248365]: 2026-01-27 23:05:36.413290772 +0000 UTC m=+0.101763516 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, release-0.7.12=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., container_name=kepler, io.openshift.expose-services=, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, managed_by=edpm_ansible, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4)
Jan 27 23:05:36 compute-0 podman[248366]: 2026-01-27 23:05:36.478261112 +0000 UTC m=+0.169033658 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 27 23:05:37 compute-0 nova_compute[185650]: 2026-01-27 23:05:37.615 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.112 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.113 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c646060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b400e30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.114 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f826c6475f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b400e30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b400e30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b400e30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b400e30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b400e30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b400e30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b400e30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b400e30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b400e30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.116 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b400e30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.116 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b400e30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.116 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b400e30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.116 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645490>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b400e30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.116 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b400e30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.116 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b400e30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.116 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b400e30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.116 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b400e30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.117 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b400e30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.117 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b400e30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.117 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b400e30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.117 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b400e30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.117 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b400e30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.117 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b400e30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.118 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b400e30>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.118 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6477a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826b400e30>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.118 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f826c645dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.118 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f826c647800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f826c647650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f826c645640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f826c8ae7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f826c645a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f826c6462a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.120 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f826c647f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.120 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f826c645af0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.120 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f826c645d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.120 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f826c645b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.120 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f826c644a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.121 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.121 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f826c6453a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.121 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.121 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f826c6454c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.121 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.122 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f826c645520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.122 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.122 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f826c645d90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.122 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.122 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f826c646570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.122 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.122 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f826c645580>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.122 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.122 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f826c6455e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.122 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.122 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f826c644050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.123 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.123 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f826c647860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.123 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.123 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f826c6476e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.123 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.123 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f826c6456a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.123 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.123 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f826f277b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.123 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.123 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f826c647770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.123 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.126 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.126 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.126 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.126 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.126 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:05:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:05:38.126 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:05:38 compute-0 nova_compute[185650]: 2026-01-27 23:05:38.543 185654 DEBUG oslo_concurrency.lockutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Acquiring lock "66eb7f87-9511-4da7-8733-ef0673cfab67" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:38 compute-0 nova_compute[185650]: 2026-01-27 23:05:38.543 185654 DEBUG oslo_concurrency.lockutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Lock "66eb7f87-9511-4da7-8733-ef0673cfab67" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:38 compute-0 nova_compute[185650]: 2026-01-27 23:05:38.558 185654 DEBUG nova.compute.manager [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 23:05:38 compute-0 nova_compute[185650]: 2026-01-27 23:05:38.678 185654 DEBUG oslo_concurrency.lockutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:38 compute-0 nova_compute[185650]: 2026-01-27 23:05:38.679 185654 DEBUG oslo_concurrency.lockutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:38 compute-0 nova_compute[185650]: 2026-01-27 23:05:38.695 185654 DEBUG nova.virt.hardware [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 23:05:38 compute-0 nova_compute[185650]: 2026-01-27 23:05:38.695 185654 INFO nova.compute.claims [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Claim successful on node compute-0.ctlplane.example.com
Jan 27 23:05:38 compute-0 nova_compute[185650]: 2026-01-27 23:05:38.815 185654 DEBUG nova.compute.provider_tree [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 23:05:38 compute-0 nova_compute[185650]: 2026-01-27 23:05:38.832 185654 DEBUG nova.scheduler.client.report [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 23:05:38 compute-0 nova_compute[185650]: 2026-01-27 23:05:38.851 185654 DEBUG oslo_concurrency.lockutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:38 compute-0 nova_compute[185650]: 2026-01-27 23:05:38.852 185654 DEBUG nova.compute.manager [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 23:05:38 compute-0 nova_compute[185650]: 2026-01-27 23:05:38.904 185654 DEBUG nova.compute.manager [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 23:05:38 compute-0 nova_compute[185650]: 2026-01-27 23:05:38.906 185654 DEBUG nova.network.neutron [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 23:05:38 compute-0 nova_compute[185650]: 2026-01-27 23:05:38.928 185654 INFO nova.virt.libvirt.driver [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 23:05:38 compute-0 nova_compute[185650]: 2026-01-27 23:05:38.949 185654 DEBUG nova.compute.manager [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 23:05:39 compute-0 nova_compute[185650]: 2026-01-27 23:05:39.080 185654 DEBUG nova.compute.manager [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 23:05:39 compute-0 nova_compute[185650]: 2026-01-27 23:05:39.082 185654 DEBUG nova.virt.libvirt.driver [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 23:05:39 compute-0 nova_compute[185650]: 2026-01-27 23:05:39.083 185654 INFO nova.virt.libvirt.driver [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Creating image(s)
Jan 27 23:05:39 compute-0 nova_compute[185650]: 2026-01-27 23:05:39.085 185654 DEBUG oslo_concurrency.lockutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Acquiring lock "/var/lib/nova/instances/66eb7f87-9511-4da7-8733-ef0673cfab67/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:39 compute-0 nova_compute[185650]: 2026-01-27 23:05:39.086 185654 DEBUG oslo_concurrency.lockutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Lock "/var/lib/nova/instances/66eb7f87-9511-4da7-8733-ef0673cfab67/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:39 compute-0 nova_compute[185650]: 2026-01-27 23:05:39.087 185654 DEBUG oslo_concurrency.lockutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Lock "/var/lib/nova/instances/66eb7f87-9511-4da7-8733-ef0673cfab67/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:39 compute-0 nova_compute[185650]: 2026-01-27 23:05:39.088 185654 DEBUG oslo_concurrency.lockutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Acquiring lock "1e4e814900a1ccc0cddf32336f7d631bc193ea2c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:39 compute-0 nova_compute[185650]: 2026-01-27 23:05:39.090 185654 DEBUG oslo_concurrency.lockutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Lock "1e4e814900a1ccc0cddf32336f7d631bc193ea2c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:39 compute-0 nova_compute[185650]: 2026-01-27 23:05:39.283 185654 DEBUG nova.policy [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4ed42d6c691545f987cae97bc62b185c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '270690dca2514a49843b866111c87d39', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 23:05:39 compute-0 nova_compute[185650]: 2026-01-27 23:05:39.472 185654 DEBUG oslo_concurrency.lockutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Acquiring lock "9033d5a6-ab60-43e3-bbcb-3a8b83161c58" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:39 compute-0 nova_compute[185650]: 2026-01-27 23:05:39.472 185654 DEBUG oslo_concurrency.lockutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Lock "9033d5a6-ab60-43e3-bbcb-3a8b83161c58" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:39 compute-0 nova_compute[185650]: 2026-01-27 23:05:39.496 185654 DEBUG nova.compute.manager [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 23:05:39 compute-0 nova_compute[185650]: 2026-01-27 23:05:39.570 185654 DEBUG oslo_concurrency.lockutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:39 compute-0 nova_compute[185650]: 2026-01-27 23:05:39.571 185654 DEBUG oslo_concurrency.lockutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:39 compute-0 nova_compute[185650]: 2026-01-27 23:05:39.578 185654 DEBUG nova.virt.hardware [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 23:05:39 compute-0 nova_compute[185650]: 2026-01-27 23:05:39.578 185654 INFO nova.compute.claims [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Claim successful on node compute-0.ctlplane.example.com
Jan 27 23:05:39 compute-0 nova_compute[185650]: 2026-01-27 23:05:39.688 185654 DEBUG nova.compute.provider_tree [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 23:05:39 compute-0 nova_compute[185650]: 2026-01-27 23:05:39.702 185654 DEBUG nova.scheduler.client.report [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 23:05:39 compute-0 nova_compute[185650]: 2026-01-27 23:05:39.743 185654 DEBUG oslo_concurrency.lockutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:39 compute-0 nova_compute[185650]: 2026-01-27 23:05:39.744 185654 DEBUG nova.compute.manager [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 23:05:39 compute-0 nova_compute[185650]: 2026-01-27 23:05:39.810 185654 DEBUG nova.compute.manager [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 23:05:39 compute-0 nova_compute[185650]: 2026-01-27 23:05:39.810 185654 DEBUG nova.network.neutron [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 23:05:39 compute-0 nova_compute[185650]: 2026-01-27 23:05:39.831 185654 INFO nova.virt.libvirt.driver [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 23:05:39 compute-0 nova_compute[185650]: 2026-01-27 23:05:39.853 185654 DEBUG nova.compute.manager [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 23:05:39 compute-0 nova_compute[185650]: 2026-01-27 23:05:39.958 185654 DEBUG nova.compute.manager [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 23:05:39 compute-0 nova_compute[185650]: 2026-01-27 23:05:39.960 185654 DEBUG nova.virt.libvirt.driver [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 23:05:39 compute-0 nova_compute[185650]: 2026-01-27 23:05:39.961 185654 INFO nova.virt.libvirt.driver [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Creating image(s)
Jan 27 23:05:39 compute-0 nova_compute[185650]: 2026-01-27 23:05:39.962 185654 DEBUG oslo_concurrency.lockutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Acquiring lock "/var/lib/nova/instances/9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:39 compute-0 nova_compute[185650]: 2026-01-27 23:05:39.962 185654 DEBUG oslo_concurrency.lockutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Lock "/var/lib/nova/instances/9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:39 compute-0 nova_compute[185650]: 2026-01-27 23:05:39.963 185654 DEBUG oslo_concurrency.lockutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Lock "/var/lib/nova/instances/9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:39 compute-0 nova_compute[185650]: 2026-01-27 23:05:39.964 185654 DEBUG oslo_concurrency.lockutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Acquiring lock "1e4e814900a1ccc0cddf32336f7d631bc193ea2c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:40 compute-0 nova_compute[185650]: 2026-01-27 23:05:40.569 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:40 compute-0 nova_compute[185650]: 2026-01-27 23:05:40.643 185654 DEBUG nova.policy [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '39e9f4625e8b494b9682d5622bf1b206', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '74f54dfa359341ba8894a95865378d18', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 23:05:40 compute-0 nova_compute[185650]: 2026-01-27 23:05:40.957 185654 DEBUG oslo_concurrency.processutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.017 185654 DEBUG oslo_concurrency.processutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c.part --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.017 185654 DEBUG nova.virt.images [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] 319632d9-1bdd-4de0-b1d2-0507a3e91b6b was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.019 185654 DEBUG nova.privsep.utils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.019 185654 DEBUG oslo_concurrency.processutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c.part /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.235 185654 DEBUG oslo_concurrency.processutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c.part /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c.converted" returned: 0 in 0.216s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.242 185654 DEBUG oslo_concurrency.processutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.325 185654 DEBUG oslo_concurrency.processutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c.converted --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.327 185654 DEBUG oslo_concurrency.lockutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Lock "1e4e814900a1ccc0cddf32336f7d631bc193ea2c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.238s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.355 185654 DEBUG oslo_concurrency.lockutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Lock "1e4e814900a1ccc0cddf32336f7d631bc193ea2c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 1.391s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.356 185654 DEBUG oslo_concurrency.lockutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Lock "1e4e814900a1ccc0cddf32336f7d631bc193ea2c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.376 185654 DEBUG oslo_concurrency.processutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.398 185654 DEBUG oslo_concurrency.processutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.451 185654 DEBUG oslo_concurrency.processutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.452 185654 DEBUG oslo_concurrency.lockutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Acquiring lock "1e4e814900a1ccc0cddf32336f7d631bc193ea2c" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.453 185654 DEBUG oslo_concurrency.lockutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Lock "1e4e814900a1ccc0cddf32336f7d631bc193ea2c" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.464 185654 DEBUG oslo_concurrency.processutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.481 185654 DEBUG oslo_concurrency.processutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.483 185654 DEBUG oslo_concurrency.lockutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Acquiring lock "1e4e814900a1ccc0cddf32336f7d631bc193ea2c" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.527 185654 DEBUG oslo_concurrency.processutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.529 185654 DEBUG oslo_concurrency.processutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c,backing_fmt=raw /var/lib/nova/instances/66eb7f87-9511-4da7-8733-ef0673cfab67/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:05:41 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:41.585 107302 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:41:f8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '26:ae:8e:b8:80:28'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 23:05:41 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:41.586 107302 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.588 185654 DEBUG oslo_concurrency.processutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c,backing_fmt=raw /var/lib/nova/instances/66eb7f87-9511-4da7-8733-ef0673cfab67/disk 1073741824" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.590 185654 DEBUG oslo_concurrency.lockutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Lock "1e4e814900a1ccc0cddf32336f7d631bc193ea2c" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.590 185654 DEBUG oslo_concurrency.processutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.603 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.605 185654 DEBUG oslo_concurrency.lockutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Lock "1e4e814900a1ccc0cddf32336f7d631bc193ea2c" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.616 185654 DEBUG oslo_concurrency.processutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.647 185654 DEBUG oslo_concurrency.processutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.648 185654 DEBUG nova.virt.disk.api [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Checking if we can resize image /var/lib/nova/instances/66eb7f87-9511-4da7-8733-ef0673cfab67/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.648 185654 DEBUG oslo_concurrency.processutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66eb7f87-9511-4da7-8733-ef0673cfab67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.673 185654 DEBUG oslo_concurrency.processutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.674 185654 DEBUG oslo_concurrency.processutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c,backing_fmt=raw /var/lib/nova/instances/9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.707 185654 DEBUG oslo_concurrency.processutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66eb7f87-9511-4da7-8733-ef0673cfab67/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.709 185654 DEBUG nova.virt.disk.api [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Cannot resize image /var/lib/nova/instances/66eb7f87-9511-4da7-8733-ef0673cfab67/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.709 185654 DEBUG nova.objects.instance [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Lazy-loading 'migration_context' on Instance uuid 66eb7f87-9511-4da7-8733-ef0673cfab67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.716 185654 DEBUG oslo_concurrency.processutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c,backing_fmt=raw /var/lib/nova/instances/9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk 1073741824" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.717 185654 DEBUG oslo_concurrency.lockutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Lock "1e4e814900a1ccc0cddf32336f7d631bc193ea2c" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.717 185654 DEBUG oslo_concurrency.processutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.736 185654 DEBUG nova.virt.libvirt.driver [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.737 185654 DEBUG nova.virt.libvirt.driver [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Ensure instance console log exists: /var/lib/nova/instances/66eb7f87-9511-4da7-8733-ef0673cfab67/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.738 185654 DEBUG oslo_concurrency.lockutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.738 185654 DEBUG oslo_concurrency.lockutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.739 185654 DEBUG oslo_concurrency.lockutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.782 185654 DEBUG oslo_concurrency.processutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.783 185654 DEBUG nova.virt.disk.api [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Checking if we can resize image /var/lib/nova/instances/9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.784 185654 DEBUG oslo_concurrency.processutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.844 185654 DEBUG oslo_concurrency.processutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.845 185654 DEBUG nova.virt.disk.api [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Cannot resize image /var/lib/nova/instances/9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.845 185654 DEBUG nova.objects.instance [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Lazy-loading 'migration_context' on Instance uuid 9033d5a6-ab60-43e3-bbcb-3a8b83161c58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.860 185654 DEBUG nova.virt.libvirt.driver [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.861 185654 DEBUG nova.virt.libvirt.driver [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Ensure instance console log exists: /var/lib/nova/instances/9033d5a6-ab60-43e3-bbcb-3a8b83161c58/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.861 185654 DEBUG oslo_concurrency.lockutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.862 185654 DEBUG oslo_concurrency.lockutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:41 compute-0 nova_compute[185650]: 2026-01-27 23:05:41.862 185654 DEBUG oslo_concurrency.lockutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:42 compute-0 nova_compute[185650]: 2026-01-27 23:05:42.248 185654 DEBUG nova.network.neutron [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Successfully created port: 64b86a6b-6de4-4fee-917e-229794042e8e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 23:05:42 compute-0 nova_compute[185650]: 2026-01-27 23:05:42.622 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:42 compute-0 nova_compute[185650]: 2026-01-27 23:05:42.624 185654 DEBUG oslo_concurrency.lockutils [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Acquiring lock "92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:42 compute-0 nova_compute[185650]: 2026-01-27 23:05:42.624 185654 DEBUG oslo_concurrency.lockutils [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Lock "92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:42 compute-0 nova_compute[185650]: 2026-01-27 23:05:42.647 185654 DEBUG nova.compute.manager [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 23:05:42 compute-0 nova_compute[185650]: 2026-01-27 23:05:42.744 185654 DEBUG oslo_concurrency.lockutils [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:42 compute-0 nova_compute[185650]: 2026-01-27 23:05:42.745 185654 DEBUG oslo_concurrency.lockutils [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:42 compute-0 nova_compute[185650]: 2026-01-27 23:05:42.756 185654 DEBUG nova.virt.hardware [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 23:05:42 compute-0 nova_compute[185650]: 2026-01-27 23:05:42.757 185654 INFO nova.compute.claims [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Claim successful on node compute-0.ctlplane.example.com
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.254 185654 DEBUG nova.compute.provider_tree [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.284 185654 DEBUG nova.scheduler.client.report [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.318 185654 DEBUG oslo_concurrency.lockutils [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.319 185654 DEBUG nova.compute.manager [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.369 185654 DEBUG nova.compute.manager [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.370 185654 DEBUG nova.network.neutron [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.393 185654 INFO nova.virt.libvirt.driver [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.421 185654 DEBUG nova.network.neutron [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Successfully created port: 5c31fe8e-f952-4e71-b32a-ec4759a7fc07 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.427 185654 DEBUG nova.compute.manager [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.507 185654 DEBUG oslo_concurrency.lockutils [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Acquiring lock "a5213d25-e31d-4018-991a-ffcc9a3cf495" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.508 185654 DEBUG oslo_concurrency.lockutils [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Lock "a5213d25-e31d-4018-991a-ffcc9a3cf495" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.533 185654 DEBUG nova.compute.manager [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.534 185654 DEBUG nova.virt.libvirt.driver [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.534 185654 INFO nova.virt.libvirt.driver [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Creating image(s)
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.535 185654 DEBUG oslo_concurrency.lockutils [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Acquiring lock "/var/lib/nova/instances/92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.536 185654 DEBUG oslo_concurrency.lockutils [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Lock "/var/lib/nova/instances/92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.536 185654 DEBUG oslo_concurrency.lockutils [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Lock "/var/lib/nova/instances/92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.549 185654 DEBUG nova.compute.manager [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.552 185654 DEBUG oslo_concurrency.processutils [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.615 185654 DEBUG oslo_concurrency.processutils [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.616 185654 DEBUG oslo_concurrency.lockutils [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Acquiring lock "1e4e814900a1ccc0cddf32336f7d631bc193ea2c" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.617 185654 DEBUG oslo_concurrency.lockutils [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Lock "1e4e814900a1ccc0cddf32336f7d631bc193ea2c" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.634 185654 DEBUG oslo_concurrency.processutils [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.672 185654 DEBUG oslo_concurrency.lockutils [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.673 185654 DEBUG oslo_concurrency.lockutils [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.681 185654 DEBUG nova.virt.hardware [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.681 185654 INFO nova.compute.claims [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Claim successful on node compute-0.ctlplane.example.com
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.700 185654 DEBUG oslo_concurrency.processutils [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.701 185654 DEBUG oslo_concurrency.processutils [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c,backing_fmt=raw /var/lib/nova/instances/92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.756 185654 DEBUG oslo_concurrency.processutils [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c,backing_fmt=raw /var/lib/nova/instances/92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1/disk 1073741824" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.757 185654 DEBUG oslo_concurrency.lockutils [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Lock "1e4e814900a1ccc0cddf32336f7d631bc193ea2c" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.758 185654 DEBUG oslo_concurrency.processutils [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.778 185654 DEBUG nova.policy [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b661812adddc45d4beba73ca32253b11', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '99d030bedd674ca8aef409ccc5f31fd2', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.836 185654 DEBUG oslo_concurrency.processutils [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.837 185654 DEBUG nova.virt.disk.api [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Checking if we can resize image /var/lib/nova/instances/92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.838 185654 DEBUG oslo_concurrency.processutils [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.905 185654 DEBUG oslo_concurrency.processutils [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.906 185654 DEBUG nova.virt.disk.api [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Cannot resize image /var/lib/nova/instances/92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.907 185654 DEBUG nova.objects.instance [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Lazy-loading 'migration_context' on Instance uuid 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.929 185654 DEBUG nova.virt.libvirt.driver [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.930 185654 DEBUG nova.virt.libvirt.driver [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Ensure instance console log exists: /var/lib/nova/instances/92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.930 185654 DEBUG oslo_concurrency.lockutils [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.931 185654 DEBUG oslo_concurrency.lockutils [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.931 185654 DEBUG oslo_concurrency.lockutils [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.945 185654 DEBUG nova.compute.provider_tree [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 23:05:43 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.967 185654 DEBUG nova.scheduler.client.report [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:43.999 185654 DEBUG oslo_concurrency.lockutils [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.326s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:44.000 185654 DEBUG nova.compute.manager [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:44.052 185654 DEBUG nova.compute.manager [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:44.053 185654 DEBUG nova.network.neutron [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:44.069 185654 INFO nova.virt.libvirt.driver [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:44.091 185654 DEBUG nova.compute.manager [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:44.170 185654 DEBUG nova.compute.manager [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:44.171 185654 DEBUG nova.virt.libvirt.driver [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:44.172 185654 INFO nova.virt.libvirt.driver [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Creating image(s)
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:44.172 185654 DEBUG oslo_concurrency.lockutils [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Acquiring lock "/var/lib/nova/instances/a5213d25-e31d-4018-991a-ffcc9a3cf495/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:44.173 185654 DEBUG oslo_concurrency.lockutils [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Lock "/var/lib/nova/instances/a5213d25-e31d-4018-991a-ffcc9a3cf495/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:44.174 185654 DEBUG oslo_concurrency.lockutils [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Lock "/var/lib/nova/instances/a5213d25-e31d-4018-991a-ffcc9a3cf495/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:44.186 185654 DEBUG oslo_concurrency.processutils [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:44.243 185654 DEBUG oslo_concurrency.processutils [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:44.244 185654 DEBUG oslo_concurrency.lockutils [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Acquiring lock "1e4e814900a1ccc0cddf32336f7d631bc193ea2c" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:44.245 185654 DEBUG oslo_concurrency.lockutils [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Lock "1e4e814900a1ccc0cddf32336f7d631bc193ea2c" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:44.256 185654 DEBUG oslo_concurrency.processutils [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:44.313 185654 DEBUG oslo_concurrency.processutils [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:44.314 185654 DEBUG oslo_concurrency.processutils [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c,backing_fmt=raw /var/lib/nova/instances/a5213d25-e31d-4018-991a-ffcc9a3cf495/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:05:44 compute-0 podman[248471]: 2026-01-27 23:05:44.366139132 +0000 UTC m=+0.067559811 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:44.374 185654 DEBUG oslo_concurrency.processutils [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c,backing_fmt=raw /var/lib/nova/instances/a5213d25-e31d-4018-991a-ffcc9a3cf495/disk 1073741824" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:44.375 185654 DEBUG oslo_concurrency.lockutils [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Lock "1e4e814900a1ccc0cddf32336f7d631bc193ea2c" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:44.375 185654 DEBUG oslo_concurrency.processutils [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:44.433 185654 DEBUG oslo_concurrency.processutils [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:44.434 185654 DEBUG nova.virt.disk.api [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Checking if we can resize image /var/lib/nova/instances/a5213d25-e31d-4018-991a-ffcc9a3cf495/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:44.434 185654 DEBUG oslo_concurrency.processutils [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a5213d25-e31d-4018-991a-ffcc9a3cf495/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:44.509 185654 DEBUG oslo_concurrency.processutils [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a5213d25-e31d-4018-991a-ffcc9a3cf495/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:44.510 185654 DEBUG nova.virt.disk.api [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Cannot resize image /var/lib/nova/instances/a5213d25-e31d-4018-991a-ffcc9a3cf495/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:44.511 185654 DEBUG nova.objects.instance [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Lazy-loading 'migration_context' on Instance uuid a5213d25-e31d-4018-991a-ffcc9a3cf495 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:44.532 185654 DEBUG nova.virt.libvirt.driver [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:44.532 185654 DEBUG nova.virt.libvirt.driver [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Ensure instance console log exists: /var/lib/nova/instances/a5213d25-e31d-4018-991a-ffcc9a3cf495/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:44.533 185654 DEBUG oslo_concurrency.lockutils [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:44.533 185654 DEBUG oslo_concurrency.lockutils [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:44.533 185654 DEBUG oslo_concurrency.lockutils [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:44 compute-0 nova_compute[185650]: 2026-01-27 23:05:44.657 185654 DEBUG nova.policy [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '97de12b7dcf64c95a6ef85a1de71a992', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1841b657d00c42cba8cf6368908d3e05', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 23:05:45 compute-0 nova_compute[185650]: 2026-01-27 23:05:45.251 185654 DEBUG nova.network.neutron [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Successfully created port: 063f8734-c708-4ac4-90bf-5a2100f150c8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 23:05:45 compute-0 nova_compute[185650]: 2026-01-27 23:05:45.480 185654 DEBUG nova.network.neutron [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Successfully updated port: 64b86a6b-6de4-4fee-917e-229794042e8e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 23:05:45 compute-0 nova_compute[185650]: 2026-01-27 23:05:45.500 185654 DEBUG oslo_concurrency.lockutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Acquiring lock "refresh_cache-66eb7f87-9511-4da7-8733-ef0673cfab67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 23:05:45 compute-0 nova_compute[185650]: 2026-01-27 23:05:45.500 185654 DEBUG oslo_concurrency.lockutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Acquired lock "refresh_cache-66eb7f87-9511-4da7-8733-ef0673cfab67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 23:05:45 compute-0 nova_compute[185650]: 2026-01-27 23:05:45.500 185654 DEBUG nova.network.neutron [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 23:05:45 compute-0 nova_compute[185650]: 2026-01-27 23:05:45.570 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:45 compute-0 nova_compute[185650]: 2026-01-27 23:05:45.780 185654 DEBUG nova.network.neutron [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 23:05:46 compute-0 podman[248508]: 2026-01-27 23:05:46.387179862 +0000 UTC m=+0.092260928 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.7, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., version=9.6, config_id=openstack_network_exporter, release=1755695350, com.redhat.component=ubi9-minimal-container, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41)
Jan 27 23:05:46 compute-0 nova_compute[185650]: 2026-01-27 23:05:46.531 185654 DEBUG nova.network.neutron [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Successfully updated port: 5c31fe8e-f952-4e71-b32a-ec4759a7fc07 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 23:05:46 compute-0 nova_compute[185650]: 2026-01-27 23:05:46.553 185654 DEBUG oslo_concurrency.lockutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Acquiring lock "refresh_cache-9033d5a6-ab60-43e3-bbcb-3a8b83161c58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 23:05:46 compute-0 nova_compute[185650]: 2026-01-27 23:05:46.554 185654 DEBUG oslo_concurrency.lockutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Acquired lock "refresh_cache-9033d5a6-ab60-43e3-bbcb-3a8b83161c58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 23:05:46 compute-0 nova_compute[185650]: 2026-01-27 23:05:46.554 185654 DEBUG nova.network.neutron [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 23:05:46 compute-0 nova_compute[185650]: 2026-01-27 23:05:46.721 185654 DEBUG nova.network.neutron [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Successfully created port: 09ecb7c4-8334-4e9d-8fbc-d238d1a73476 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 23:05:46 compute-0 nova_compute[185650]: 2026-01-27 23:05:46.796 185654 DEBUG nova.compute.manager [req-30e5d05d-a821-482c-9b87-8158d3f70ca6 req-fa56a4e0-c76f-4a2f-a246-f4c675ea38c4 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Received event network-changed-64b86a6b-6de4-4fee-917e-229794042e8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 23:05:46 compute-0 nova_compute[185650]: 2026-01-27 23:05:46.796 185654 DEBUG nova.compute.manager [req-30e5d05d-a821-482c-9b87-8158d3f70ca6 req-fa56a4e0-c76f-4a2f-a246-f4c675ea38c4 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Refreshing instance network info cache due to event network-changed-64b86a6b-6de4-4fee-917e-229794042e8e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 23:05:46 compute-0 nova_compute[185650]: 2026-01-27 23:05:46.796 185654 DEBUG oslo_concurrency.lockutils [req-30e5d05d-a821-482c-9b87-8158d3f70ca6 req-fa56a4e0-c76f-4a2f-a246-f4c675ea38c4 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "refresh_cache-66eb7f87-9511-4da7-8733-ef0673cfab67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 23:05:47 compute-0 nova_compute[185650]: 2026-01-27 23:05:47.396 185654 DEBUG nova.network.neutron [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Successfully updated port: 063f8734-c708-4ac4-90bf-5a2100f150c8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 23:05:47 compute-0 nova_compute[185650]: 2026-01-27 23:05:47.414 185654 DEBUG oslo_concurrency.lockutils [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Acquiring lock "refresh_cache-92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 23:05:47 compute-0 nova_compute[185650]: 2026-01-27 23:05:47.414 185654 DEBUG oslo_concurrency.lockutils [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Acquired lock "refresh_cache-92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 23:05:47 compute-0 nova_compute[185650]: 2026-01-27 23:05:47.414 185654 DEBUG nova.network.neutron [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 23:05:47 compute-0 nova_compute[185650]: 2026-01-27 23:05:47.457 185654 DEBUG nova.network.neutron [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 23:05:47 compute-0 nova_compute[185650]: 2026-01-27 23:05:47.606 185654 DEBUG nova.network.neutron [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 23:05:47 compute-0 nova_compute[185650]: 2026-01-27 23:05:47.627 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:48 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:48.589 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e88f80e1-ee63-4bdc-95c3-ad473efb7428, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.628 185654 DEBUG nova.network.neutron [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Updating instance_info_cache with network_info: [{"id": "64b86a6b-6de4-4fee-917e-229794042e8e", "address": "fa:16:3e:23:60:c6", "network": {"id": "6d0f9d9e-8cd6-4a68-8926-de88e69f60d4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1504245290-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "270690dca2514a49843b866111c87d39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64b86a6b-6d", "ovs_interfaceid": "64b86a6b-6de4-4fee-917e-229794042e8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.648 185654 DEBUG oslo_concurrency.lockutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Releasing lock "refresh_cache-66eb7f87-9511-4da7-8733-ef0673cfab67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.649 185654 DEBUG nova.compute.manager [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Instance network_info: |[{"id": "64b86a6b-6de4-4fee-917e-229794042e8e", "address": "fa:16:3e:23:60:c6", "network": {"id": "6d0f9d9e-8cd6-4a68-8926-de88e69f60d4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1504245290-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "270690dca2514a49843b866111c87d39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64b86a6b-6d", "ovs_interfaceid": "64b86a6b-6de4-4fee-917e-229794042e8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.650 185654 DEBUG oslo_concurrency.lockutils [req-30e5d05d-a821-482c-9b87-8158d3f70ca6 req-fa56a4e0-c76f-4a2f-a246-f4c675ea38c4 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquired lock "refresh_cache-66eb7f87-9511-4da7-8733-ef0673cfab67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.650 185654 DEBUG nova.network.neutron [req-30e5d05d-a821-482c-9b87-8158d3f70ca6 req-fa56a4e0-c76f-4a2f-a246-f4c675ea38c4 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Refreshing network info cache for port 64b86a6b-6de4-4fee-917e-229794042e8e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.653 185654 DEBUG nova.virt.libvirt.driver [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Start _get_guest_xml network_info=[{"id": "64b86a6b-6de4-4fee-917e-229794042e8e", "address": "fa:16:3e:23:60:c6", "network": {"id": "6d0f9d9e-8cd6-4a68-8926-de88e69f60d4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1504245290-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "270690dca2514a49843b866111c87d39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64b86a6b-6d", "ovs_interfaceid": "64b86a6b-6de4-4fee-917e-229794042e8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T23:04:44Z,direct_url=<?>,disk_format='qcow2',id=319632d9-1bdd-4de0-b1d2-0507a3e91b6b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8318d5a200d74e4386cf4972db015b75',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T23:04:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_format': None, 'boot_index': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '319632d9-1bdd-4de0-b1d2-0507a3e91b6b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.659 185654 WARNING nova.virt.libvirt.driver [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.671 185654 DEBUG nova.virt.libvirt.host [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.672 185654 DEBUG nova.virt.libvirt.host [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.677 185654 DEBUG nova.virt.libvirt.host [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.678 185654 DEBUG nova.virt.libvirt.host [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.678 185654 DEBUG nova.virt.libvirt.driver [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.678 185654 DEBUG nova.virt.hardware [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T23:04:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='d732a0b9-79cd-4ff7-8741-11ae188a8b69',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T23:04:44Z,direct_url=<?>,disk_format='qcow2',id=319632d9-1bdd-4de0-b1d2-0507a3e91b6b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8318d5a200d74e4386cf4972db015b75',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T23:04:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.679 185654 DEBUG nova.virt.hardware [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.679 185654 DEBUG nova.virt.hardware [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.679 185654 DEBUG nova.virt.hardware [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.680 185654 DEBUG nova.virt.hardware [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.680 185654 DEBUG nova.virt.hardware [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.680 185654 DEBUG nova.virt.hardware [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.681 185654 DEBUG nova.virt.hardware [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.681 185654 DEBUG nova.virt.hardware [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.681 185654 DEBUG nova.virt.hardware [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.681 185654 DEBUG nova.virt.hardware [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.686 185654 DEBUG nova.virt.libvirt.vif [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T23:05:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-817207074',display_name='tempest-ServerActionsTestJSON-server-817207074',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-817207074',id=6,image_ref='319632d9-1bdd-4de0-b1d2-0507a3e91b6b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEr+Jgo7+m7vAopHr6s/6R6KnzKo1C14Mm5MYuNhYtJRRKSD9j8tUhrT7bkccs/+olHT/vH7VaJMmDYY2Sz5Hj9CoIfuGK6Ucq92W8x4q+UMgrejyCtDqYr3p5PosRYEcQ==',key_name='tempest-keypair-1751861480',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='270690dca2514a49843b866111c87d39',ramdisk_id='',reservation_id='r-7bwij9wj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='319632d9-1bdd-4de0-b1d2-0507a3e91b6b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1464685127',owner_user_name='tempest-ServerActionsTestJSON-1464685127-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T23:05:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4ed42d6c691545f987cae97bc62b185c',uuid=66eb7f87-9511-4da7-8733-ef0673cfab67,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "64b86a6b-6de4-4fee-917e-229794042e8e", "address": "fa:16:3e:23:60:c6", "network": {"id": "6d0f9d9e-8cd6-4a68-8926-de88e69f60d4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1504245290-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "270690dca2514a49843b866111c87d39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64b86a6b-6d", "ovs_interfaceid": "64b86a6b-6de4-4fee-917e-229794042e8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.686 185654 DEBUG nova.network.os_vif_util [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Converting VIF {"id": "64b86a6b-6de4-4fee-917e-229794042e8e", "address": "fa:16:3e:23:60:c6", "network": {"id": "6d0f9d9e-8cd6-4a68-8926-de88e69f60d4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1504245290-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "270690dca2514a49843b866111c87d39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64b86a6b-6d", "ovs_interfaceid": "64b86a6b-6de4-4fee-917e-229794042e8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.687 185654 DEBUG nova.network.os_vif_util [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:60:c6,bridge_name='br-int',has_traffic_filtering=True,id=64b86a6b-6de4-4fee-917e-229794042e8e,network=Network(6d0f9d9e-8cd6-4a68-8926-de88e69f60d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64b86a6b-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.688 185654 DEBUG nova.objects.instance [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Lazy-loading 'pci_devices' on Instance uuid 66eb7f87-9511-4da7-8733-ef0673cfab67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.710 185654 DEBUG nova.virt.libvirt.driver [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] End _get_guest_xml xml=<domain type="kvm">
Jan 27 23:05:48 compute-0 nova_compute[185650]:   <uuid>66eb7f87-9511-4da7-8733-ef0673cfab67</uuid>
Jan 27 23:05:48 compute-0 nova_compute[185650]:   <name>instance-00000006</name>
Jan 27 23:05:48 compute-0 nova_compute[185650]:   <memory>131072</memory>
Jan 27 23:05:48 compute-0 nova_compute[185650]:   <vcpu>1</vcpu>
Jan 27 23:05:48 compute-0 nova_compute[185650]:   <metadata>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 23:05:48 compute-0 nova_compute[185650]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:       <nova:name>tempest-ServerActionsTestJSON-server-817207074</nova:name>
Jan 27 23:05:48 compute-0 nova_compute[185650]:       <nova:creationTime>2026-01-27 23:05:48</nova:creationTime>
Jan 27 23:05:48 compute-0 nova_compute[185650]:       <nova:flavor name="m1.nano">
Jan 27 23:05:48 compute-0 nova_compute[185650]:         <nova:memory>128</nova:memory>
Jan 27 23:05:48 compute-0 nova_compute[185650]:         <nova:disk>1</nova:disk>
Jan 27 23:05:48 compute-0 nova_compute[185650]:         <nova:swap>0</nova:swap>
Jan 27 23:05:48 compute-0 nova_compute[185650]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 23:05:48 compute-0 nova_compute[185650]:         <nova:vcpus>1</nova:vcpus>
Jan 27 23:05:48 compute-0 nova_compute[185650]:       </nova:flavor>
Jan 27 23:05:48 compute-0 nova_compute[185650]:       <nova:owner>
Jan 27 23:05:48 compute-0 nova_compute[185650]:         <nova:user uuid="4ed42d6c691545f987cae97bc62b185c">tempest-ServerActionsTestJSON-1464685127-project-member</nova:user>
Jan 27 23:05:48 compute-0 nova_compute[185650]:         <nova:project uuid="270690dca2514a49843b866111c87d39">tempest-ServerActionsTestJSON-1464685127</nova:project>
Jan 27 23:05:48 compute-0 nova_compute[185650]:       </nova:owner>
Jan 27 23:05:48 compute-0 nova_compute[185650]:       <nova:root type="image" uuid="319632d9-1bdd-4de0-b1d2-0507a3e91b6b"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:       <nova:ports>
Jan 27 23:05:48 compute-0 nova_compute[185650]:         <nova:port uuid="64b86a6b-6de4-4fee-917e-229794042e8e">
Jan 27 23:05:48 compute-0 nova_compute[185650]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:         </nova:port>
Jan 27 23:05:48 compute-0 nova_compute[185650]:       </nova:ports>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     </nova:instance>
Jan 27 23:05:48 compute-0 nova_compute[185650]:   </metadata>
Jan 27 23:05:48 compute-0 nova_compute[185650]:   <sysinfo type="smbios">
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <system>
Jan 27 23:05:48 compute-0 nova_compute[185650]:       <entry name="manufacturer">RDO</entry>
Jan 27 23:05:48 compute-0 nova_compute[185650]:       <entry name="product">OpenStack Compute</entry>
Jan 27 23:05:48 compute-0 nova_compute[185650]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 23:05:48 compute-0 nova_compute[185650]:       <entry name="serial">66eb7f87-9511-4da7-8733-ef0673cfab67</entry>
Jan 27 23:05:48 compute-0 nova_compute[185650]:       <entry name="uuid">66eb7f87-9511-4da7-8733-ef0673cfab67</entry>
Jan 27 23:05:48 compute-0 nova_compute[185650]:       <entry name="family">Virtual Machine</entry>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     </system>
Jan 27 23:05:48 compute-0 nova_compute[185650]:   </sysinfo>
Jan 27 23:05:48 compute-0 nova_compute[185650]:   <os>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <boot dev="hd"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <smbios mode="sysinfo"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:   </os>
Jan 27 23:05:48 compute-0 nova_compute[185650]:   <features>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <acpi/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <apic/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <vmcoreinfo/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:   </features>
Jan 27 23:05:48 compute-0 nova_compute[185650]:   <clock offset="utc">
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <timer name="hpet" present="no"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:   </clock>
Jan 27 23:05:48 compute-0 nova_compute[185650]:   <cpu mode="host-model" match="exact">
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:   </cpu>
Jan 27 23:05:48 compute-0 nova_compute[185650]:   <devices>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <disk type="file" device="disk">
Jan 27 23:05:48 compute-0 nova_compute[185650]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:       <source file="/var/lib/nova/instances/66eb7f87-9511-4da7-8733-ef0673cfab67/disk"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:       <target dev="vda" bus="virtio"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     </disk>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <disk type="file" device="cdrom">
Jan 27 23:05:48 compute-0 nova_compute[185650]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:       <source file="/var/lib/nova/instances/66eb7f87-9511-4da7-8733-ef0673cfab67/disk.config"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:       <target dev="sda" bus="sata"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     </disk>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <interface type="ethernet">
Jan 27 23:05:48 compute-0 nova_compute[185650]:       <mac address="fa:16:3e:23:60:c6"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:       <model type="virtio"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:       <mtu size="1442"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:       <target dev="tap64b86a6b-6d"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     </interface>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <serial type="pty">
Jan 27 23:05:48 compute-0 nova_compute[185650]:       <log file="/var/lib/nova/instances/66eb7f87-9511-4da7-8733-ef0673cfab67/console.log" append="off"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     </serial>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <video>
Jan 27 23:05:48 compute-0 nova_compute[185650]:       <model type="virtio"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     </video>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <input type="tablet" bus="usb"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <rng model="virtio">
Jan 27 23:05:48 compute-0 nova_compute[185650]:       <backend model="random">/dev/urandom</backend>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     </rng>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <controller type="usb" index="0"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     <memballoon model="virtio">
Jan 27 23:05:48 compute-0 nova_compute[185650]:       <stats period="10"/>
Jan 27 23:05:48 compute-0 nova_compute[185650]:     </memballoon>
Jan 27 23:05:48 compute-0 nova_compute[185650]:   </devices>
Jan 27 23:05:48 compute-0 nova_compute[185650]: </domain>
Jan 27 23:05:48 compute-0 nova_compute[185650]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.712 185654 DEBUG nova.compute.manager [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Preparing to wait for external event network-vif-plugged-64b86a6b-6de4-4fee-917e-229794042e8e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.712 185654 DEBUG oslo_concurrency.lockutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Acquiring lock "66eb7f87-9511-4da7-8733-ef0673cfab67-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.712 185654 DEBUG oslo_concurrency.lockutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Lock "66eb7f87-9511-4da7-8733-ef0673cfab67-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.712 185654 DEBUG oslo_concurrency.lockutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Lock "66eb7f87-9511-4da7-8733-ef0673cfab67-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.713 185654 DEBUG nova.virt.libvirt.vif [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T23:05:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-817207074',display_name='tempest-ServerActionsTestJSON-server-817207074',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-817207074',id=6,image_ref='319632d9-1bdd-4de0-b1d2-0507a3e91b6b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEr+Jgo7+m7vAopHr6s/6R6KnzKo1C14Mm5MYuNhYtJRRKSD9j8tUhrT7bkccs/+olHT/vH7VaJMmDYY2Sz5Hj9CoIfuGK6Ucq92W8x4q+UMgrejyCtDqYr3p5PosRYEcQ==',key_name='tempest-keypair-1751861480',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='270690dca2514a49843b866111c87d39',ramdisk_id='',reservation_id='r-7bwij9wj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='319632d9-1bdd-4de0-b1d2-0507a3e91b6b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1464685127',owner_user_name='tempest-ServerActionsTestJSON-1464685127-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T23:05:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4ed42d6c691545f987cae97bc62b185c',uuid=66eb7f87-9511-4da7-8733-ef0673cfab67,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "64b86a6b-6de4-4fee-917e-229794042e8e", "address": "fa:16:3e:23:60:c6", "network": {"id": "6d0f9d9e-8cd6-4a68-8926-de88e69f60d4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1504245290-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "270690dca2514a49843b866111c87d39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64b86a6b-6d", "ovs_interfaceid": "64b86a6b-6de4-4fee-917e-229794042e8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.713 185654 DEBUG nova.network.os_vif_util [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Converting VIF {"id": "64b86a6b-6de4-4fee-917e-229794042e8e", "address": "fa:16:3e:23:60:c6", "network": {"id": "6d0f9d9e-8cd6-4a68-8926-de88e69f60d4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1504245290-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "270690dca2514a49843b866111c87d39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64b86a6b-6d", "ovs_interfaceid": "64b86a6b-6de4-4fee-917e-229794042e8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.714 185654 DEBUG nova.network.os_vif_util [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:60:c6,bridge_name='br-int',has_traffic_filtering=True,id=64b86a6b-6de4-4fee-917e-229794042e8e,network=Network(6d0f9d9e-8cd6-4a68-8926-de88e69f60d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64b86a6b-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.715 185654 DEBUG os_vif [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:60:c6,bridge_name='br-int',has_traffic_filtering=True,id=64b86a6b-6de4-4fee-917e-229794042e8e,network=Network(6d0f9d9e-8cd6-4a68-8926-de88e69f60d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64b86a6b-6d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.715 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.716 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.716 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.719 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.720 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64b86a6b-6d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.720 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap64b86a6b-6d, col_values=(('external_ids', {'iface-id': '64b86a6b-6de4-4fee-917e-229794042e8e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:23:60:c6', 'vm-uuid': '66eb7f87-9511-4da7-8733-ef0673cfab67'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.722 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:48 compute-0 NetworkManager[56600]: <info>  [1769555148.7233] manager: (tap64b86a6b-6d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.724 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.730 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.730 185654 INFO os_vif [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:60:c6,bridge_name='br-int',has_traffic_filtering=True,id=64b86a6b-6de4-4fee-917e-229794042e8e,network=Network(6d0f9d9e-8cd6-4a68-8926-de88e69f60d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64b86a6b-6d')
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.791 185654 DEBUG nova.virt.libvirt.driver [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.791 185654 DEBUG nova.virt.libvirt.driver [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.791 185654 DEBUG nova.virt.libvirt.driver [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] No VIF found with MAC fa:16:3e:23:60:c6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.792 185654 INFO nova.virt.libvirt.driver [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Using config drive
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.840 185654 DEBUG nova.compute.manager [req-4aae8a2e-2b9c-435d-90ed-2f3530aaeaec req-9ffe50a4-02e0-40d9-8d92-9664c484cf42 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Received event network-changed-063f8734-c708-4ac4-90bf-5a2100f150c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.841 185654 DEBUG nova.compute.manager [req-4aae8a2e-2b9c-435d-90ed-2f3530aaeaec req-9ffe50a4-02e0-40d9-8d92-9664c484cf42 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Refreshing instance network info cache due to event network-changed-063f8734-c708-4ac4-90bf-5a2100f150c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 23:05:48 compute-0 nova_compute[185650]: 2026-01-27 23:05:48.841 185654 DEBUG oslo_concurrency.lockutils [req-4aae8a2e-2b9c-435d-90ed-2f3530aaeaec req-9ffe50a4-02e0-40d9-8d92-9664c484cf42 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "refresh_cache-92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.420 185654 DEBUG nova.network.neutron [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Updating instance_info_cache with network_info: [{"id": "5c31fe8e-f952-4e71-b32a-ec4759a7fc07", "address": "fa:16:3e:81:28:a4", "network": {"id": "b56ee5fa-e690-4d9b-a6e1-7815589f421e", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-161936656-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74f54dfa359341ba8894a95865378d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c31fe8e-f9", "ovs_interfaceid": "5c31fe8e-f952-4e71-b32a-ec4759a7fc07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.449 185654 DEBUG oslo_concurrency.lockutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Releasing lock "refresh_cache-9033d5a6-ab60-43e3-bbcb-3a8b83161c58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.449 185654 DEBUG nova.compute.manager [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Instance network_info: |[{"id": "5c31fe8e-f952-4e71-b32a-ec4759a7fc07", "address": "fa:16:3e:81:28:a4", "network": {"id": "b56ee5fa-e690-4d9b-a6e1-7815589f421e", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-161936656-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74f54dfa359341ba8894a95865378d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c31fe8e-f9", "ovs_interfaceid": "5c31fe8e-f952-4e71-b32a-ec4759a7fc07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.451 185654 DEBUG nova.virt.libvirt.driver [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Start _get_guest_xml network_info=[{"id": "5c31fe8e-f952-4e71-b32a-ec4759a7fc07", "address": "fa:16:3e:81:28:a4", "network": {"id": "b56ee5fa-e690-4d9b-a6e1-7815589f421e", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-161936656-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74f54dfa359341ba8894a95865378d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c31fe8e-f9", "ovs_interfaceid": "5c31fe8e-f952-4e71-b32a-ec4759a7fc07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T23:04:44Z,direct_url=<?>,disk_format='qcow2',id=319632d9-1bdd-4de0-b1d2-0507a3e91b6b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8318d5a200d74e4386cf4972db015b75',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T23:04:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_format': None, 'boot_index': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '319632d9-1bdd-4de0-b1d2-0507a3e91b6b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.458 185654 WARNING nova.virt.libvirt.driver [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.464 185654 DEBUG nova.virt.libvirt.host [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.465 185654 DEBUG nova.virt.libvirt.host [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.469 185654 DEBUG nova.virt.libvirt.host [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.470 185654 DEBUG nova.virt.libvirt.host [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.470 185654 DEBUG nova.virt.libvirt.driver [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.470 185654 DEBUG nova.virt.hardware [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T23:04:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='d732a0b9-79cd-4ff7-8741-11ae188a8b69',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T23:04:44Z,direct_url=<?>,disk_format='qcow2',id=319632d9-1bdd-4de0-b1d2-0507a3e91b6b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8318d5a200d74e4386cf4972db015b75',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T23:04:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.471 185654 DEBUG nova.virt.hardware [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.471 185654 DEBUG nova.virt.hardware [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.471 185654 DEBUG nova.virt.hardware [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.471 185654 DEBUG nova.virt.hardware [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.471 185654 DEBUG nova.virt.hardware [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.472 185654 DEBUG nova.virt.hardware [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.472 185654 DEBUG nova.virt.hardware [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.472 185654 DEBUG nova.virt.hardware [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.472 185654 DEBUG nova.virt.hardware [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.472 185654 DEBUG nova.virt.hardware [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.475 185654 DEBUG nova.virt.libvirt.vif [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T23:05:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1437890012',display_name='tempest-AttachInterfacesUnderV243Test-server-1437890012',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1437890012',id=7,image_ref='319632d9-1bdd-4de0-b1d2-0507a3e91b6b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGLWuwkylpGC2zv0hEhalRcJMiJCEjgASbVHpOgNuR/EeBqu3l0Om/P4jjwAekvdO7wiUERtZMF1ig5SH7SSsipksRO9x1b5Fh3Kg7nXz5Q90/jCZDzLnqarGF8fLbl4LA==',key_name='tempest-keypair-1401102305',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='74f54dfa359341ba8894a95865378d18',ramdisk_id='',reservation_id='r-nakjiuxy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='319632d9-1bdd-4de0-b1d2-0507a3e91b6b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1154542967',owner_user_name='tempest-AttachInterfacesUnderV243Test-1154542967-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T23:05:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='39e9f4625e8b494b9682d5622bf1b206',uuid=9033d5a6-ab60-43e3-bbcb-3a8b83161c58,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5c31fe8e-f952-4e71-b32a-ec4759a7fc07", "address": "fa:16:3e:81:28:a4", "network": {"id": "b56ee5fa-e690-4d9b-a6e1-7815589f421e", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-161936656-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74f54dfa359341ba8894a95865378d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c31fe8e-f9", "ovs_interfaceid": "5c31fe8e-f952-4e71-b32a-ec4759a7fc07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.476 185654 DEBUG nova.network.os_vif_util [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Converting VIF {"id": "5c31fe8e-f952-4e71-b32a-ec4759a7fc07", "address": "fa:16:3e:81:28:a4", "network": {"id": "b56ee5fa-e690-4d9b-a6e1-7815589f421e", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-161936656-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74f54dfa359341ba8894a95865378d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c31fe8e-f9", "ovs_interfaceid": "5c31fe8e-f952-4e71-b32a-ec4759a7fc07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.476 185654 DEBUG nova.network.os_vif_util [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:28:a4,bridge_name='br-int',has_traffic_filtering=True,id=5c31fe8e-f952-4e71-b32a-ec4759a7fc07,network=Network(b56ee5fa-e690-4d9b-a6e1-7815589f421e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c31fe8e-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.477 185654 DEBUG nova.objects.instance [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9033d5a6-ab60-43e3-bbcb-3a8b83161c58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.500 185654 DEBUG nova.virt.libvirt.driver [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] End _get_guest_xml xml=<domain type="kvm">
Jan 27 23:05:49 compute-0 nova_compute[185650]:   <uuid>9033d5a6-ab60-43e3-bbcb-3a8b83161c58</uuid>
Jan 27 23:05:49 compute-0 nova_compute[185650]:   <name>instance-00000007</name>
Jan 27 23:05:49 compute-0 nova_compute[185650]:   <memory>131072</memory>
Jan 27 23:05:49 compute-0 nova_compute[185650]:   <vcpu>1</vcpu>
Jan 27 23:05:49 compute-0 nova_compute[185650]:   <metadata>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 23:05:49 compute-0 nova_compute[185650]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:       <nova:name>tempest-AttachInterfacesUnderV243Test-server-1437890012</nova:name>
Jan 27 23:05:49 compute-0 nova_compute[185650]:       <nova:creationTime>2026-01-27 23:05:49</nova:creationTime>
Jan 27 23:05:49 compute-0 nova_compute[185650]:       <nova:flavor name="m1.nano">
Jan 27 23:05:49 compute-0 nova_compute[185650]:         <nova:memory>128</nova:memory>
Jan 27 23:05:49 compute-0 nova_compute[185650]:         <nova:disk>1</nova:disk>
Jan 27 23:05:49 compute-0 nova_compute[185650]:         <nova:swap>0</nova:swap>
Jan 27 23:05:49 compute-0 nova_compute[185650]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 23:05:49 compute-0 nova_compute[185650]:         <nova:vcpus>1</nova:vcpus>
Jan 27 23:05:49 compute-0 nova_compute[185650]:       </nova:flavor>
Jan 27 23:05:49 compute-0 nova_compute[185650]:       <nova:owner>
Jan 27 23:05:49 compute-0 nova_compute[185650]:         <nova:user uuid="39e9f4625e8b494b9682d5622bf1b206">tempest-AttachInterfacesUnderV243Test-1154542967-project-member</nova:user>
Jan 27 23:05:49 compute-0 nova_compute[185650]:         <nova:project uuid="74f54dfa359341ba8894a95865378d18">tempest-AttachInterfacesUnderV243Test-1154542967</nova:project>
Jan 27 23:05:49 compute-0 nova_compute[185650]:       </nova:owner>
Jan 27 23:05:49 compute-0 nova_compute[185650]:       <nova:root type="image" uuid="319632d9-1bdd-4de0-b1d2-0507a3e91b6b"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:       <nova:ports>
Jan 27 23:05:49 compute-0 nova_compute[185650]:         <nova:port uuid="5c31fe8e-f952-4e71-b32a-ec4759a7fc07">
Jan 27 23:05:49 compute-0 nova_compute[185650]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:         </nova:port>
Jan 27 23:05:49 compute-0 nova_compute[185650]:       </nova:ports>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     </nova:instance>
Jan 27 23:05:49 compute-0 nova_compute[185650]:   </metadata>
Jan 27 23:05:49 compute-0 nova_compute[185650]:   <sysinfo type="smbios">
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <system>
Jan 27 23:05:49 compute-0 nova_compute[185650]:       <entry name="manufacturer">RDO</entry>
Jan 27 23:05:49 compute-0 nova_compute[185650]:       <entry name="product">OpenStack Compute</entry>
Jan 27 23:05:49 compute-0 nova_compute[185650]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 23:05:49 compute-0 nova_compute[185650]:       <entry name="serial">9033d5a6-ab60-43e3-bbcb-3a8b83161c58</entry>
Jan 27 23:05:49 compute-0 nova_compute[185650]:       <entry name="uuid">9033d5a6-ab60-43e3-bbcb-3a8b83161c58</entry>
Jan 27 23:05:49 compute-0 nova_compute[185650]:       <entry name="family">Virtual Machine</entry>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     </system>
Jan 27 23:05:49 compute-0 nova_compute[185650]:   </sysinfo>
Jan 27 23:05:49 compute-0 nova_compute[185650]:   <os>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <boot dev="hd"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <smbios mode="sysinfo"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:   </os>
Jan 27 23:05:49 compute-0 nova_compute[185650]:   <features>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <acpi/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <apic/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <vmcoreinfo/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:   </features>
Jan 27 23:05:49 compute-0 nova_compute[185650]:   <clock offset="utc">
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <timer name="hpet" present="no"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:   </clock>
Jan 27 23:05:49 compute-0 nova_compute[185650]:   <cpu mode="host-model" match="exact">
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:   </cpu>
Jan 27 23:05:49 compute-0 nova_compute[185650]:   <devices>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <disk type="file" device="disk">
Jan 27 23:05:49 compute-0 nova_compute[185650]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:       <source file="/var/lib/nova/instances/9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:       <target dev="vda" bus="virtio"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     </disk>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <disk type="file" device="cdrom">
Jan 27 23:05:49 compute-0 nova_compute[185650]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:       <source file="/var/lib/nova/instances/9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk.config"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:       <target dev="sda" bus="sata"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     </disk>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <interface type="ethernet">
Jan 27 23:05:49 compute-0 nova_compute[185650]:       <mac address="fa:16:3e:81:28:a4"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:       <model type="virtio"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:       <mtu size="1442"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:       <target dev="tap5c31fe8e-f9"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     </interface>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <serial type="pty">
Jan 27 23:05:49 compute-0 nova_compute[185650]:       <log file="/var/lib/nova/instances/9033d5a6-ab60-43e3-bbcb-3a8b83161c58/console.log" append="off"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     </serial>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <video>
Jan 27 23:05:49 compute-0 nova_compute[185650]:       <model type="virtio"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     </video>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <input type="tablet" bus="usb"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <rng model="virtio">
Jan 27 23:05:49 compute-0 nova_compute[185650]:       <backend model="random">/dev/urandom</backend>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     </rng>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <controller type="usb" index="0"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     <memballoon model="virtio">
Jan 27 23:05:49 compute-0 nova_compute[185650]:       <stats period="10"/>
Jan 27 23:05:49 compute-0 nova_compute[185650]:     </memballoon>
Jan 27 23:05:49 compute-0 nova_compute[185650]:   </devices>
Jan 27 23:05:49 compute-0 nova_compute[185650]: </domain>
Jan 27 23:05:49 compute-0 nova_compute[185650]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.501 185654 DEBUG nova.compute.manager [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Preparing to wait for external event network-vif-plugged-5c31fe8e-f952-4e71-b32a-ec4759a7fc07 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.501 185654 DEBUG oslo_concurrency.lockutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Acquiring lock "9033d5a6-ab60-43e3-bbcb-3a8b83161c58-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.502 185654 DEBUG oslo_concurrency.lockutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Lock "9033d5a6-ab60-43e3-bbcb-3a8b83161c58-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.502 185654 DEBUG oslo_concurrency.lockutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Lock "9033d5a6-ab60-43e3-bbcb-3a8b83161c58-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.503 185654 DEBUG nova.virt.libvirt.vif [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T23:05:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1437890012',display_name='tempest-AttachInterfacesUnderV243Test-server-1437890012',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1437890012',id=7,image_ref='319632d9-1bdd-4de0-b1d2-0507a3e91b6b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGLWuwkylpGC2zv0hEhalRcJMiJCEjgASbVHpOgNuR/EeBqu3l0Om/P4jjwAekvdO7wiUERtZMF1ig5SH7SSsipksRO9x1b5Fh3Kg7nXz5Q90/jCZDzLnqarGF8fLbl4LA==',key_name='tempest-keypair-1401102305',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='74f54dfa359341ba8894a95865378d18',ramdisk_id='',reservation_id='r-nakjiuxy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='319632d9-1bdd-4de0-b1d2-0507a3e91b6b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1154542967',owner_user_name='tempest-AttachInterfacesUnderV243Test-1154542967-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T23:05:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='39e9f4625e8b494b9682d5622bf1b206',uuid=9033d5a6-ab60-43e3-bbcb-3a8b83161c58,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5c31fe8e-f952-4e71-b32a-ec4759a7fc07", "address": "fa:16:3e:81:28:a4", "network": {"id": "b56ee5fa-e690-4d9b-a6e1-7815589f421e", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-161936656-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74f54dfa359341ba8894a95865378d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c31fe8e-f9", "ovs_interfaceid": "5c31fe8e-f952-4e71-b32a-ec4759a7fc07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.504 185654 DEBUG nova.network.os_vif_util [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Converting VIF {"id": "5c31fe8e-f952-4e71-b32a-ec4759a7fc07", "address": "fa:16:3e:81:28:a4", "network": {"id": "b56ee5fa-e690-4d9b-a6e1-7815589f421e", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-161936656-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74f54dfa359341ba8894a95865378d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c31fe8e-f9", "ovs_interfaceid": "5c31fe8e-f952-4e71-b32a-ec4759a7fc07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.505 185654 DEBUG nova.network.os_vif_util [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:28:a4,bridge_name='br-int',has_traffic_filtering=True,id=5c31fe8e-f952-4e71-b32a-ec4759a7fc07,network=Network(b56ee5fa-e690-4d9b-a6e1-7815589f421e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c31fe8e-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.505 185654 DEBUG os_vif [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:28:a4,bridge_name='br-int',has_traffic_filtering=True,id=5c31fe8e-f952-4e71-b32a-ec4759a7fc07,network=Network(b56ee5fa-e690-4d9b-a6e1-7815589f421e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c31fe8e-f9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.506 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.506 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.507 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.510 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.511 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c31fe8e-f9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.511 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5c31fe8e-f9, col_values=(('external_ids', {'iface-id': '5c31fe8e-f952-4e71-b32a-ec4759a7fc07', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:81:28:a4', 'vm-uuid': '9033d5a6-ab60-43e3-bbcb-3a8b83161c58'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.513 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:49 compute-0 NetworkManager[56600]: <info>  [1769555149.5147] manager: (tap5c31fe8e-f9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.516 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.525 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.526 185654 INFO os_vif [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:28:a4,bridge_name='br-int',has_traffic_filtering=True,id=5c31fe8e-f952-4e71-b32a-ec4759a7fc07,network=Network(b56ee5fa-e690-4d9b-a6e1-7815589f421e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c31fe8e-f9')
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.563 185654 INFO nova.virt.libvirt.driver [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Creating config drive at /var/lib/nova/instances/66eb7f87-9511-4da7-8733-ef0673cfab67/disk.config
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.569 185654 DEBUG oslo_concurrency.processutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/66eb7f87-9511-4da7-8733-ef0673cfab67/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3rk8949e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.630 185654 DEBUG nova.virt.libvirt.driver [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.631 185654 DEBUG nova.virt.libvirt.driver [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.631 185654 DEBUG nova.virt.libvirt.driver [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] No VIF found with MAC fa:16:3e:81:28:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.632 185654 INFO nova.virt.libvirt.driver [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Using config drive
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.693 185654 DEBUG oslo_concurrency.processutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/66eb7f87-9511-4da7-8733-ef0673cfab67/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3rk8949e" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:05:49 compute-0 kernel: tap64b86a6b-6d: entered promiscuous mode
Jan 27 23:05:49 compute-0 NetworkManager[56600]: <info>  [1769555149.7562] manager: (tap64b86a6b-6d): new Tun device (/org/freedesktop/NetworkManager/Devices/35)
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.760 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:49 compute-0 ovn_controller[98048]: 2026-01-27T23:05:49Z|00066|binding|INFO|Claiming lport 64b86a6b-6de4-4fee-917e-229794042e8e for this chassis.
Jan 27 23:05:49 compute-0 ovn_controller[98048]: 2026-01-27T23:05:49Z|00067|binding|INFO|64b86a6b-6de4-4fee-917e-229794042e8e: Claiming fa:16:3e:23:60:c6 10.100.0.8
Jan 27 23:05:49 compute-0 ovn_controller[98048]: 2026-01-27T23:05:49Z|00068|binding|INFO|Setting lport 64b86a6b-6de4-4fee-917e-229794042e8e ovn-installed in OVS
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.784 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:49 compute-0 ovn_controller[98048]: 2026-01-27T23:05:49Z|00069|binding|INFO|Setting lport 64b86a6b-6de4-4fee-917e-229794042e8e up in Southbound
Jan 27 23:05:49 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:49.786 107302 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:60:c6 10.100.0.8'], port_security=['fa:16:3e:23:60:c6 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '66eb7f87-9511-4da7-8733-ef0673cfab67', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d0f9d9e-8cd6-4a68-8926-de88e69f60d4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '270690dca2514a49843b866111c87d39', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bd240ad0-5b13-4363-a7fb-4df878909f19', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7759edbc-df05-491d-832a-ee279677d2d9, chassis=[<ovs.db.idl.Row object at 0x7f8d908cb640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8d908cb640>], logical_port=64b86a6b-6de4-4fee-917e-229794042e8e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 23:05:49 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:49.788 107302 INFO neutron.agent.ovn.metadata.agent [-] Port 64b86a6b-6de4-4fee-917e-229794042e8e in datapath 6d0f9d9e-8cd6-4a68-8926-de88e69f60d4 bound to our chassis
Jan 27 23:05:49 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:49.790 107302 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d0f9d9e-8cd6-4a68-8926-de88e69f60d4
Jan 27 23:05:49 compute-0 nova_compute[185650]: 2026-01-27 23:05:49.796 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:49 compute-0 systemd-udevd[248555]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 23:05:49 compute-0 systemd-machined[157036]: New machine qemu-6-instance-00000006.
Jan 27 23:05:49 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:49.803 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[0eb4b661-7c65-47a3-be75-2854a65a38bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:49 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:49.804 107302 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6d0f9d9e-81 in ovnmeta-6d0f9d9e-8cd6-4a68-8926-de88e69f60d4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 23:05:49 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:49.806 238735 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6d0f9d9e-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 23:05:49 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:49.806 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[d5825607-fcf0-4dd5-afbc-3d83d1ddbbc1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:49 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:49.807 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[987e3029-7b7d-40e3-8205-b907630d199e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:49 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Jan 27 23:05:49 compute-0 NetworkManager[56600]: <info>  [1769555149.8185] device (tap64b86a6b-6d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 23:05:49 compute-0 NetworkManager[56600]: <info>  [1769555149.8216] device (tap64b86a6b-6d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 23:05:49 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:49.820 107797 DEBUG oslo.privsep.daemon [-] privsep: reply[aee26449-9e94-47bb-aaf4-f21fb0a36efa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:49 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:49.848 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[69a7769a-f94d-44e4-844c-7e44749ef052]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:49 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:49.878 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[9a263558-9363-4d19-9b2e-10b278d8c20f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:49 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:49.884 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[9ce76aa8-dbd7-4dff-9f3b-877eab39dba4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:49 compute-0 NetworkManager[56600]: <info>  [1769555149.8869] manager: (tap6d0f9d9e-80): new Veth device (/org/freedesktop/NetworkManager/Devices/36)
Jan 27 23:05:49 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:49.916 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[ed36ecd7-3806-4c65-a27f-a9b40875596c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:49 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:49.919 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[d9f8cd28-3499-409a-b8fa-a0aaf90bf41d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:49 compute-0 NetworkManager[56600]: <info>  [1769555149.9448] device (tap6d0f9d9e-80): carrier: link connected
Jan 27 23:05:49 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:49.954 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[c1bcdeed-7a29-4a2a-b975-d581aed12f35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:49 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:49.972 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[4df47ac6-fce8-4c23-aead-7f9b6eef8512]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d0f9d9e-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:9c:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498153, 'reachable_time': 33734, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248591, 'error': None, 'target': 'ovnmeta-6d0f9d9e-8cd6-4a68-8926-de88e69f60d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:49 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:49.991 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[c707766e-7962-402d-81cc-4909783556bd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed2:9c26'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 498153, 'tstamp': 498153}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248592, 'error': None, 'target': 'ovnmeta-6d0f9d9e-8cd6-4a68-8926-de88e69f60d4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:49 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:50.010 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[5835c3d8-6a97-4f90-b033-141a51ef94c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d0f9d9e-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:9c:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498153, 'reachable_time': 33734, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 248594, 'error': None, 'target': 'ovnmeta-6d0f9d9e-8cd6-4a68-8926-de88e69f60d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.014 185654 DEBUG nova.network.neutron [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Updating instance_info_cache with network_info: [{"id": "063f8734-c708-4ac4-90bf-5a2100f150c8", "address": "fa:16:3e:c8:3b:73", "network": {"id": "ce133cd3-da57-40b5-95f2-7f015476df55", "bridge": "br-int", "label": "tempest-ServersTestJSON-441071384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99d030bedd674ca8aef409ccc5f31fd2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap063f8734-c7", "ovs_interfaceid": "063f8734-c708-4ac4-90bf-5a2100f150c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 23:05:50 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:50.053 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[5f58a837-1a17-4415-a7eb-5f37da2084f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.102 185654 DEBUG oslo_concurrency.lockutils [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Releasing lock "refresh_cache-92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.102 185654 DEBUG nova.compute.manager [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Instance network_info: |[{"id": "063f8734-c708-4ac4-90bf-5a2100f150c8", "address": "fa:16:3e:c8:3b:73", "network": {"id": "ce133cd3-da57-40b5-95f2-7f015476df55", "bridge": "br-int", "label": "tempest-ServersTestJSON-441071384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99d030bedd674ca8aef409ccc5f31fd2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap063f8734-c7", "ovs_interfaceid": "063f8734-c708-4ac4-90bf-5a2100f150c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.102 185654 DEBUG oslo_concurrency.lockutils [req-4aae8a2e-2b9c-435d-90ed-2f3530aaeaec req-9ffe50a4-02e0-40d9-8d92-9664c484cf42 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquired lock "refresh_cache-92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.102 185654 DEBUG nova.network.neutron [req-4aae8a2e-2b9c-435d-90ed-2f3530aaeaec req-9ffe50a4-02e0-40d9-8d92-9664c484cf42 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Refreshing network info cache for port 063f8734-c708-4ac4-90bf-5a2100f150c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.105 185654 DEBUG nova.virt.libvirt.driver [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Start _get_guest_xml network_info=[{"id": "063f8734-c708-4ac4-90bf-5a2100f150c8", "address": "fa:16:3e:c8:3b:73", "network": {"id": "ce133cd3-da57-40b5-95f2-7f015476df55", "bridge": "br-int", "label": "tempest-ServersTestJSON-441071384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99d030bedd674ca8aef409ccc5f31fd2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap063f8734-c7", "ovs_interfaceid": "063f8734-c708-4ac4-90bf-5a2100f150c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T23:04:44Z,direct_url=<?>,disk_format='qcow2',id=319632d9-1bdd-4de0-b1d2-0507a3e91b6b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8318d5a200d74e4386cf4972db015b75',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T23:04:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_format': None, 'boot_index': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '319632d9-1bdd-4de0-b1d2-0507a3e91b6b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.113 185654 WARNING nova.virt.libvirt.driver [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.119 185654 DEBUG nova.virt.libvirt.host [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.119 185654 DEBUG nova.virt.libvirt.host [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.134 185654 DEBUG nova.virt.libvirt.host [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:50.135 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[d061ddf9-4f2a-4478-bb2f-124e72c627a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.136 185654 DEBUG nova.virt.libvirt.host [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:50.137 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d0f9d9e-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:50.138 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.138 185654 DEBUG nova.virt.libvirt.driver [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:50.139 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d0f9d9e-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.138 185654 DEBUG nova.virt.hardware [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T23:04:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='d732a0b9-79cd-4ff7-8741-11ae188a8b69',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T23:04:44Z,direct_url=<?>,disk_format='qcow2',id=319632d9-1bdd-4de0-b1d2-0507a3e91b6b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8318d5a200d74e4386cf4972db015b75',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T23:04:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.139 185654 DEBUG nova.virt.hardware [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.139 185654 DEBUG nova.virt.hardware [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.139 185654 DEBUG nova.virt.hardware [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.139 185654 DEBUG nova.virt.hardware [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.140 185654 DEBUG nova.virt.hardware [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 23:05:50 compute-0 kernel: tap6d0f9d9e-80: entered promiscuous mode
Jan 27 23:05:50 compute-0 NetworkManager[56600]: <info>  [1769555150.1434] manager: (tap6d0f9d9e-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.145 185654 DEBUG nova.virt.hardware [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.145 185654 DEBUG nova.virt.hardware [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.146 185654 DEBUG nova.virt.hardware [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.146 185654 DEBUG nova.virt.hardware [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.146 185654 DEBUG nova.virt.hardware [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:50.153 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d0f9d9e-80, col_values=(('external_ids', {'iface-id': 'babee362-409a-4d1f-bc47-c6a6dce734ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.153 185654 DEBUG nova.virt.libvirt.vif [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T23:05:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-295068544',display_name='tempest-ServersTestJSON-server-295068544',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-295068544',id=8,image_ref='319632d9-1bdd-4de0-b1d2-0507a3e91b6b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOduhWHyouHCtRXfH7MrLfcwd0dJphJOUMH0Qoms/901k0RmU1WUrglIpw5S6nBg+kWfRVhjfT3WaO1uhXYyDW7tFhwKehJxN/isuJfe7J5L2LEWwrpRzA11HbJZ3RMe8A==',key_name='tempest-keypair-139774848',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='99d030bedd674ca8aef409ccc5f31fd2',ramdisk_id='',reservation_id='r-nysq9n2c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='319632d9-1bdd-4de0-b1d2-0507a3e91b6b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1401357921',owner_user_name='tempest-ServersTestJSON-1401357921-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T23:05:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b661812adddc45d4beba73ca32253b11',uuid=92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "063f8734-c708-4ac4-90bf-5a2100f150c8", "address": "fa:16:3e:c8:3b:73", "network": {"id": "ce133cd3-da57-40b5-95f2-7f015476df55", "bridge": "br-int", "label": "tempest-ServersTestJSON-441071384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99d030bedd674ca8aef409ccc5f31fd2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap063f8734-c7", "ovs_interfaceid": "063f8734-c708-4ac4-90bf-5a2100f150c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.154 185654 DEBUG nova.network.os_vif_util [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Converting VIF {"id": "063f8734-c708-4ac4-90bf-5a2100f150c8", "address": "fa:16:3e:c8:3b:73", "network": {"id": "ce133cd3-da57-40b5-95f2-7f015476df55", "bridge": "br-int", "label": "tempest-ServersTestJSON-441071384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99d030bedd674ca8aef409ccc5f31fd2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap063f8734-c7", "ovs_interfaceid": "063f8734-c708-4ac4-90bf-5a2100f150c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.155 185654 DEBUG nova.network.os_vif_util [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:3b:73,bridge_name='br-int',has_traffic_filtering=True,id=063f8734-c708-4ac4-90bf-5a2100f150c8,network=Network(ce133cd3-da57-40b5-95f2-7f015476df55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap063f8734-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 23:05:50 compute-0 ovn_controller[98048]: 2026-01-27T23:05:50Z|00070|binding|INFO|Releasing lport babee362-409a-4d1f-bc47-c6a6dce734ff from this chassis (sb_readonly=0)
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.156 185654 DEBUG nova.objects.instance [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.158 185654 DEBUG nova.virt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Emitting event <LifecycleEvent: 1769555150.1375282, 66eb7f87-9511-4da7-8733-ef0673cfab67 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.158 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] VM Started (Lifecycle Event)
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.164 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.168 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.173 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:50.174 107302 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6d0f9d9e-8cd6-4a68-8926-de88e69f60d4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6d0f9d9e-8cd6-4a68-8926-de88e69f60d4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:50.175 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[3ee7044b-9407-4d20-a19b-5b8280a2f145]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:50.176 107302 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]: global
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]:     log         /dev/log local0 debug
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]:     log-tag     haproxy-metadata-proxy-6d0f9d9e-8cd6-4a68-8926-de88e69f60d4
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]:     user        root
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]:     group       root
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]:     maxconn     1024
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]:     pidfile     /var/lib/neutron/external/pids/6d0f9d9e-8cd6-4a68-8926-de88e69f60d4.pid.haproxy
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]:     daemon
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]: 
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]: defaults
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]:     log global
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]:     mode http
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]:     option httplog
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]:     option dontlognull
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]:     option http-server-close
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]:     option forwardfor
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]:     retries                 3
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]:     timeout http-request    30s
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]:     timeout connect         30s
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]:     timeout client          32s
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]:     timeout server          32s
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]:     timeout http-keep-alive 30s
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]: 
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]: 
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]: listen listener
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]:     bind 169.254.169.254:80
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]:     http-request add-header X-OVN-Network-ID 6d0f9d9e-8cd6-4a68-8926-de88e69f60d4
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 23:05:50 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:50.177 107302 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6d0f9d9e-8cd6-4a68-8926-de88e69f60d4', 'env', 'PROCESS_TAG=haproxy-6d0f9d9e-8cd6-4a68-8926-de88e69f60d4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6d0f9d9e-8cd6-4a68-8926-de88e69f60d4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.186 185654 DEBUG nova.virt.libvirt.driver [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] End _get_guest_xml xml=<domain type="kvm">
Jan 27 23:05:50 compute-0 nova_compute[185650]:   <uuid>92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1</uuid>
Jan 27 23:05:50 compute-0 nova_compute[185650]:   <name>instance-00000008</name>
Jan 27 23:05:50 compute-0 nova_compute[185650]:   <memory>131072</memory>
Jan 27 23:05:50 compute-0 nova_compute[185650]:   <vcpu>1</vcpu>
Jan 27 23:05:50 compute-0 nova_compute[185650]:   <metadata>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 23:05:50 compute-0 nova_compute[185650]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:       <nova:name>tempest-ServersTestJSON-server-295068544</nova:name>
Jan 27 23:05:50 compute-0 nova_compute[185650]:       <nova:creationTime>2026-01-27 23:05:50</nova:creationTime>
Jan 27 23:05:50 compute-0 nova_compute[185650]:       <nova:flavor name="m1.nano">
Jan 27 23:05:50 compute-0 nova_compute[185650]:         <nova:memory>128</nova:memory>
Jan 27 23:05:50 compute-0 nova_compute[185650]:         <nova:disk>1</nova:disk>
Jan 27 23:05:50 compute-0 nova_compute[185650]:         <nova:swap>0</nova:swap>
Jan 27 23:05:50 compute-0 nova_compute[185650]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 23:05:50 compute-0 nova_compute[185650]:         <nova:vcpus>1</nova:vcpus>
Jan 27 23:05:50 compute-0 nova_compute[185650]:       </nova:flavor>
Jan 27 23:05:50 compute-0 nova_compute[185650]:       <nova:owner>
Jan 27 23:05:50 compute-0 nova_compute[185650]:         <nova:user uuid="b661812adddc45d4beba73ca32253b11">tempest-ServersTestJSON-1401357921-project-member</nova:user>
Jan 27 23:05:50 compute-0 nova_compute[185650]:         <nova:project uuid="99d030bedd674ca8aef409ccc5f31fd2">tempest-ServersTestJSON-1401357921</nova:project>
Jan 27 23:05:50 compute-0 nova_compute[185650]:       </nova:owner>
Jan 27 23:05:50 compute-0 nova_compute[185650]:       <nova:root type="image" uuid="319632d9-1bdd-4de0-b1d2-0507a3e91b6b"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:       <nova:ports>
Jan 27 23:05:50 compute-0 nova_compute[185650]:         <nova:port uuid="063f8734-c708-4ac4-90bf-5a2100f150c8">
Jan 27 23:05:50 compute-0 nova_compute[185650]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:         </nova:port>
Jan 27 23:05:50 compute-0 nova_compute[185650]:       </nova:ports>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     </nova:instance>
Jan 27 23:05:50 compute-0 nova_compute[185650]:   </metadata>
Jan 27 23:05:50 compute-0 nova_compute[185650]:   <sysinfo type="smbios">
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <system>
Jan 27 23:05:50 compute-0 nova_compute[185650]:       <entry name="manufacturer">RDO</entry>
Jan 27 23:05:50 compute-0 nova_compute[185650]:       <entry name="product">OpenStack Compute</entry>
Jan 27 23:05:50 compute-0 nova_compute[185650]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 23:05:50 compute-0 nova_compute[185650]:       <entry name="serial">92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1</entry>
Jan 27 23:05:50 compute-0 nova_compute[185650]:       <entry name="uuid">92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1</entry>
Jan 27 23:05:50 compute-0 nova_compute[185650]:       <entry name="family">Virtual Machine</entry>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     </system>
Jan 27 23:05:50 compute-0 nova_compute[185650]:   </sysinfo>
Jan 27 23:05:50 compute-0 nova_compute[185650]:   <os>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <boot dev="hd"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <smbios mode="sysinfo"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:   </os>
Jan 27 23:05:50 compute-0 nova_compute[185650]:   <features>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <acpi/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <apic/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <vmcoreinfo/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:   </features>
Jan 27 23:05:50 compute-0 nova_compute[185650]:   <clock offset="utc">
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <timer name="hpet" present="no"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:   </clock>
Jan 27 23:05:50 compute-0 nova_compute[185650]:   <cpu mode="host-model" match="exact">
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:   </cpu>
Jan 27 23:05:50 compute-0 nova_compute[185650]:   <devices>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <disk type="file" device="disk">
Jan 27 23:05:50 compute-0 nova_compute[185650]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:       <source file="/var/lib/nova/instances/92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1/disk"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:       <target dev="vda" bus="virtio"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     </disk>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <disk type="file" device="cdrom">
Jan 27 23:05:50 compute-0 nova_compute[185650]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:       <source file="/var/lib/nova/instances/92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1/disk.config"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:       <target dev="sda" bus="sata"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     </disk>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <interface type="ethernet">
Jan 27 23:05:50 compute-0 nova_compute[185650]:       <mac address="fa:16:3e:c8:3b:73"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:       <model type="virtio"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:       <mtu size="1442"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:       <target dev="tap063f8734-c7"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     </interface>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <serial type="pty">
Jan 27 23:05:50 compute-0 nova_compute[185650]:       <log file="/var/lib/nova/instances/92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1/console.log" append="off"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     </serial>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <video>
Jan 27 23:05:50 compute-0 nova_compute[185650]:       <model type="virtio"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     </video>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <input type="tablet" bus="usb"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <rng model="virtio">
Jan 27 23:05:50 compute-0 nova_compute[185650]:       <backend model="random">/dev/urandom</backend>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     </rng>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <controller type="usb" index="0"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     <memballoon model="virtio">
Jan 27 23:05:50 compute-0 nova_compute[185650]:       <stats period="10"/>
Jan 27 23:05:50 compute-0 nova_compute[185650]:     </memballoon>
Jan 27 23:05:50 compute-0 nova_compute[185650]:   </devices>
Jan 27 23:05:50 compute-0 nova_compute[185650]: </domain>
Jan 27 23:05:50 compute-0 nova_compute[185650]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.187 185654 DEBUG nova.compute.manager [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Preparing to wait for external event network-vif-plugged-063f8734-c708-4ac4-90bf-5a2100f150c8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.187 185654 DEBUG oslo_concurrency.lockutils [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Acquiring lock "92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.187 185654 DEBUG oslo_concurrency.lockutils [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Lock "92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.188 185654 DEBUG oslo_concurrency.lockutils [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Lock "92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.189 185654 DEBUG nova.virt.libvirt.vif [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T23:05:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-295068544',display_name='tempest-ServersTestJSON-server-295068544',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-295068544',id=8,image_ref='319632d9-1bdd-4de0-b1d2-0507a3e91b6b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOduhWHyouHCtRXfH7MrLfcwd0dJphJOUMH0Qoms/901k0RmU1WUrglIpw5S6nBg+kWfRVhjfT3WaO1uhXYyDW7tFhwKehJxN/isuJfe7J5L2LEWwrpRzA11HbJZ3RMe8A==',key_name='tempest-keypair-139774848',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='99d030bedd674ca8aef409ccc5f31fd2',ramdisk_id='',reservation_id='r-nysq9n2c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='319632d9-1bdd-4de0-b1d2-0507a3e91b6b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1401357921',owner_user_name='tempest-ServersTestJSON-1401357921-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T23:05:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b661812adddc45d4beba73ca32253b11',uuid=92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "063f8734-c708-4ac4-90bf-5a2100f150c8", "address": "fa:16:3e:c8:3b:73", "network": {"id": "ce133cd3-da57-40b5-95f2-7f015476df55", "bridge": "br-int", "label": "tempest-ServersTestJSON-441071384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99d030bedd674ca8aef409ccc5f31fd2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap063f8734-c7", "ovs_interfaceid": "063f8734-c708-4ac4-90bf-5a2100f150c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.189 185654 DEBUG nova.network.os_vif_util [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Converting VIF {"id": "063f8734-c708-4ac4-90bf-5a2100f150c8", "address": "fa:16:3e:c8:3b:73", "network": {"id": "ce133cd3-da57-40b5-95f2-7f015476df55", "bridge": "br-int", "label": "tempest-ServersTestJSON-441071384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99d030bedd674ca8aef409ccc5f31fd2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap063f8734-c7", "ovs_interfaceid": "063f8734-c708-4ac4-90bf-5a2100f150c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.190 185654 DEBUG nova.network.os_vif_util [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:3b:73,bridge_name='br-int',has_traffic_filtering=True,id=063f8734-c708-4ac4-90bf-5a2100f150c8,network=Network(ce133cd3-da57-40b5-95f2-7f015476df55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap063f8734-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.190 185654 DEBUG os_vif [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:3b:73,bridge_name='br-int',has_traffic_filtering=True,id=063f8734-c708-4ac4-90bf-5a2100f150c8,network=Network(ce133cd3-da57-40b5-95f2-7f015476df55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap063f8734-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.191 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.192 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.192 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.197 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.198 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap063f8734-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.198 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap063f8734-c7, col_values=(('external_ids', {'iface-id': '063f8734-c708-4ac4-90bf-5a2100f150c8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c8:3b:73', 'vm-uuid': '92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.200 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 23:05:50 compute-0 NetworkManager[56600]: <info>  [1769555150.2012] manager: (tap063f8734-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.201 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.203 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.210 185654 DEBUG nova.virt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Emitting event <LifecycleEvent: 1769555150.1376643, 66eb7f87-9511-4da7-8733-ef0673cfab67 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.211 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] VM Paused (Lifecycle Event)
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.217 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.218 185654 INFO os_vif [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:3b:73,bridge_name='br-int',has_traffic_filtering=True,id=063f8734-c708-4ac4-90bf-5a2100f150c8,network=Network(ce133cd3-da57-40b5-95f2-7f015476df55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap063f8734-c7')
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.251 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.255 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.276 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.289 185654 DEBUG nova.virt.libvirt.driver [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.289 185654 DEBUG nova.virt.libvirt.driver [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.290 185654 DEBUG nova.virt.libvirt.driver [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] No VIF found with MAC fa:16:3e:c8:3b:73, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.290 185654 INFO nova.virt.libvirt.driver [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Using config drive
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.371 185654 DEBUG nova.compute.manager [req-c6ed9d1c-3364-4ced-a8c3-eac02d86f38f req-6e968018-95e6-4d92-9d9b-fa128aa6f813 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Received event network-changed-5c31fe8e-f952-4e71-b32a-ec4759a7fc07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.371 185654 DEBUG nova.compute.manager [req-c6ed9d1c-3364-4ced-a8c3-eac02d86f38f req-6e968018-95e6-4d92-9d9b-fa128aa6f813 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Refreshing instance network info cache due to event network-changed-5c31fe8e-f952-4e71-b32a-ec4759a7fc07. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.371 185654 DEBUG oslo_concurrency.lockutils [req-c6ed9d1c-3364-4ced-a8c3-eac02d86f38f req-6e968018-95e6-4d92-9d9b-fa128aa6f813 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "refresh_cache-9033d5a6-ab60-43e3-bbcb-3a8b83161c58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.371 185654 DEBUG oslo_concurrency.lockutils [req-c6ed9d1c-3364-4ced-a8c3-eac02d86f38f req-6e968018-95e6-4d92-9d9b-fa128aa6f813 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquired lock "refresh_cache-9033d5a6-ab60-43e3-bbcb-3a8b83161c58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.372 185654 DEBUG nova.network.neutron [req-c6ed9d1c-3364-4ced-a8c3-eac02d86f38f req-6e968018-95e6-4d92-9d9b-fa128aa6f813 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Refreshing network info cache for port 5c31fe8e-f952-4e71-b32a-ec4759a7fc07 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.572 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:50 compute-0 podman[248656]: 2026-01-27 23:05:50.64386502 +0000 UTC m=+0.075582714 container create 0f25fde85248a6c5d0e9ace1d8bf5c97199582866d88646db182d687db91ea4c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d0f9d9e-8cd6-4a68-8926-de88e69f60d4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 27 23:05:50 compute-0 systemd[1]: Started libpod-conmon-0f25fde85248a6c5d0e9ace1d8bf5c97199582866d88646db182d687db91ea4c.scope.
Jan 27 23:05:50 compute-0 podman[248656]: 2026-01-27 23:05:50.603864764 +0000 UTC m=+0.035582488 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 23:05:50 compute-0 systemd[1]: Started libcrun container.
Jan 27 23:05:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a247c0ef2a817c07457740235bf0aaed41e009eb35a59f1082e00f63727b3298/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 23:05:50 compute-0 podman[248656]: 2026-01-27 23:05:50.745941828 +0000 UTC m=+0.177659612 container init 0f25fde85248a6c5d0e9ace1d8bf5c97199582866d88646db182d687db91ea4c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d0f9d9e-8cd6-4a68-8926-de88e69f60d4, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 27 23:05:50 compute-0 podman[248656]: 2026-01-27 23:05:50.753312375 +0000 UTC m=+0.185030099 container start 0f25fde85248a6c5d0e9ace1d8bf5c97199582866d88646db182d687db91ea4c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d0f9d9e-8cd6-4a68-8926-de88e69f60d4, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.758 185654 INFO nova.virt.libvirt.driver [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Creating config drive at /var/lib/nova/instances/9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk.config
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.764 185654 DEBUG oslo_concurrency.processutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe6uc_5z1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:05:50 compute-0 neutron-haproxy-ovnmeta-6d0f9d9e-8cd6-4a68-8926-de88e69f60d4[248671]: [NOTICE]   (248675) : New worker (248678) forked
Jan 27 23:05:50 compute-0 neutron-haproxy-ovnmeta-6d0f9d9e-8cd6-4a68-8926-de88e69f60d4[248671]: [NOTICE]   (248675) : Loading success.
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.896 185654 DEBUG oslo_concurrency.processutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe6uc_5z1" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:05:50 compute-0 kernel: tap5c31fe8e-f9: entered promiscuous mode
Jan 27 23:05:50 compute-0 NetworkManager[56600]: <info>  [1769555150.9890] manager: (tap5c31fe8e-f9): new Tun device (/org/freedesktop/NetworkManager/Devices/39)
Jan 27 23:05:50 compute-0 ovn_controller[98048]: 2026-01-27T23:05:50Z|00071|binding|INFO|Claiming lport 5c31fe8e-f952-4e71-b32a-ec4759a7fc07 for this chassis.
Jan 27 23:05:50 compute-0 ovn_controller[98048]: 2026-01-27T23:05:50Z|00072|binding|INFO|5c31fe8e-f952-4e71-b32a-ec4759a7fc07: Claiming fa:16:3e:81:28:a4 10.100.0.11
Jan 27 23:05:50 compute-0 nova_compute[185650]: 2026-01-27 23:05:50.997 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:51.014 107302 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:28:a4 10.100.0.11'], port_security=['fa:16:3e:81:28:a4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9033d5a6-ab60-43e3-bbcb-3a8b83161c58', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b56ee5fa-e690-4d9b-a6e1-7815589f421e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74f54dfa359341ba8894a95865378d18', 'neutron:revision_number': '2', 'neutron:security_group_ids': '72da0c58-294e-4cba-af57-a14dc63a33bf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=feace4b7-78ce-4312-a6f3-a86e625695ed, chassis=[<ovs.db.idl.Row object at 0x7f8d908cb640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8d908cb640>], logical_port=5c31fe8e-f952-4e71-b32a-ec4759a7fc07) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:51.016 107302 INFO neutron.agent.ovn.metadata.agent [-] Port 5c31fe8e-f952-4e71-b32a-ec4759a7fc07 in datapath b56ee5fa-e690-4d9b-a6e1-7815589f421e bound to our chassis
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:51.019 107302 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b56ee5fa-e690-4d9b-a6e1-7815589f421e
Jan 27 23:05:51 compute-0 ovn_controller[98048]: 2026-01-27T23:05:51Z|00073|binding|INFO|Setting lport 5c31fe8e-f952-4e71-b32a-ec4759a7fc07 ovn-installed in OVS
Jan 27 23:05:51 compute-0 ovn_controller[98048]: 2026-01-27T23:05:51Z|00074|binding|INFO|Setting lport 5c31fe8e-f952-4e71-b32a-ec4759a7fc07 up in Southbound
Jan 27 23:05:51 compute-0 NetworkManager[56600]: <info>  [1769555151.0259] device (tap5c31fe8e-f9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 23:05:51 compute-0 NetworkManager[56600]: <info>  [1769555151.0265] device (tap5c31fe8e-f9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 23:05:51 compute-0 ovn_controller[98048]: 2026-01-27T23:05:51Z|00075|binding|INFO|Releasing lport babee362-409a-4d1f-bc47-c6a6dce734ff from this chassis (sb_readonly=0)
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.030 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:51.036 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[2c75a8f9-9dbf-4a0d-91d8-e9c84e59f03a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:51.037 107302 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb56ee5fa-e1 in ovnmeta-b56ee5fa-e690-4d9b-a6e1-7815589f421e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.039 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:51.039 238735 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb56ee5fa-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:51.039 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[11ee37ff-4d01-4e16-be15-4d17bce03112]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:51.040 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[c7ab7511-068f-468b-99ca-10f9acb49ef1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.045 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:51 compute-0 systemd-machined[157036]: New machine qemu-7-instance-00000007.
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:51.054 107797 DEBUG oslo.privsep.daemon [-] privsep: reply[b94679f6-d584-4506-a6d0-448a5b1fd1a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:51 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:51.089 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[f95868c5-6893-4cba-8cdd-fcfb3617daf8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:51.122 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[a7f2e4c3-5f91-4dae-9c93-7d2a82e57b4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:51.143 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[161ba478-96ce-43cf-a53b-eeccb2677be0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:51 compute-0 NetworkManager[56600]: <info>  [1769555151.1603] manager: (tapb56ee5fa-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/40)
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:51.190 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[7605132c-8fd6-42ed-8089-a8e1637b0636]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:51.193 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[0f6d116d-aafb-4050-a94a-1ef562d6646a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:51 compute-0 NetworkManager[56600]: <info>  [1769555151.2221] device (tapb56ee5fa-e0): carrier: link connected
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:51.228 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[108f426a-2579-4776-a348-2916f762f4db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.244 185654 INFO nova.virt.libvirt.driver [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Creating config drive at /var/lib/nova/instances/92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1/disk.config
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.249 185654 DEBUG oslo_concurrency.processutils [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo9kz7r4a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:51.249 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[70b6861b-7061-4e9d-9361-1e2cf99307ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb56ee5fa-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:f3:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498281, 'reachable_time': 30413, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248723, 'error': None, 'target': 'ovnmeta-b56ee5fa-e690-4d9b-a6e1-7815589f421e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:51 compute-0 ovn_controller[98048]: 2026-01-27T23:05:51Z|00076|binding|INFO|Releasing lport babee362-409a-4d1f-bc47-c6a6dce734ff from this chassis (sb_readonly=0)
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.266 185654 DEBUG nova.network.neutron [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Successfully updated port: 09ecb7c4-8334-4e9d-8fbc-d238d1a73476 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.269 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:51.276 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[4e4342c2-f2dd-4d54-8717-b6bf4f0f5d94]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe33:f3f8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 498281, 'tstamp': 498281}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248727, 'error': None, 'target': 'ovnmeta-b56ee5fa-e690-4d9b-a6e1-7815589f421e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:51.298 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[b1681089-9f62-4780-bbd2-c7b2e98c985c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb56ee5fa-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:f3:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498281, 'reachable_time': 30413, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 248730, 'error': None, 'target': 'ovnmeta-b56ee5fa-e690-4d9b-a6e1-7815589f421e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.302 185654 DEBUG oslo_concurrency.lockutils [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Acquiring lock "refresh_cache-a5213d25-e31d-4018-991a-ffcc9a3cf495" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.303 185654 DEBUG oslo_concurrency.lockutils [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Acquired lock "refresh_cache-a5213d25-e31d-4018-991a-ffcc9a3cf495" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.304 185654 DEBUG nova.network.neutron [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:51.335 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[8f30c822-0391-43e9-ac27-f92a75f6725a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.373 185654 DEBUG oslo_concurrency.processutils [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo9kz7r4a" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:51.412 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[89faacef-c6c4-421b-a820-6d5032431ddb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:51.414 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb56ee5fa-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:51.414 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:51.415 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb56ee5fa-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.417 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:51 compute-0 kernel: tapb56ee5fa-e0: entered promiscuous mode
Jan 27 23:05:51 compute-0 NetworkManager[56600]: <info>  [1769555151.4210] manager: (tapb56ee5fa-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.428 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:51.430 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb56ee5fa-e0, col_values=(('external_ids', {'iface-id': '41776a65-3925-474f-a135-3e28059d7e34'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:05:51 compute-0 ovn_controller[98048]: 2026-01-27T23:05:51Z|00077|binding|INFO|Releasing lport 41776a65-3925-474f-a135-3e28059d7e34 from this chassis (sb_readonly=0)
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.432 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:51 compute-0 NetworkManager[56600]: <info>  [1769555151.4452] manager: (tap063f8734-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.452 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:51 compute-0 kernel: tap063f8734-c7: entered promiscuous mode
Jan 27 23:05:51 compute-0 ovn_controller[98048]: 2026-01-27T23:05:51Z|00078|binding|INFO|Claiming lport 063f8734-c708-4ac4-90bf-5a2100f150c8 for this chassis.
Jan 27 23:05:51 compute-0 ovn_controller[98048]: 2026-01-27T23:05:51Z|00079|binding|INFO|063f8734-c708-4ac4-90bf-5a2100f150c8: Claiming fa:16:3e:c8:3b:73 10.100.0.9
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.460 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:51.461 107302 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b56ee5fa-e690-4d9b-a6e1-7815589f421e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b56ee5fa-e690-4d9b-a6e1-7815589f421e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:51.462 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[541b4eef-0549-4694-af17-50e94b070f15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:51.462 107302 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: global
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]:     log         /dev/log local0 debug
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]:     log-tag     haproxy-metadata-proxy-b56ee5fa-e690-4d9b-a6e1-7815589f421e
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]:     user        root
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]:     group       root
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]:     maxconn     1024
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]:     pidfile     /var/lib/neutron/external/pids/b56ee5fa-e690-4d9b-a6e1-7815589f421e.pid.haproxy
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]:     daemon
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: 
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: defaults
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]:     log global
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]:     mode http
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]:     option httplog
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]:     option dontlognull
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]:     option http-server-close
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]:     option forwardfor
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]:     retries                 3
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]:     timeout http-request    30s
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]:     timeout connect         30s
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]:     timeout client          32s
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]:     timeout server          32s
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]:     timeout http-keep-alive 30s
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: 
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: 
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: listen listener
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]:     bind 169.254.169.254:80
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]:     http-request add-header X-OVN-Network-ID b56ee5fa-e690-4d9b-a6e1-7815589f421e
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:51.463 107302 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b56ee5fa-e690-4d9b-a6e1-7815589f421e', 'env', 'PROCESS_TAG=haproxy-b56ee5fa-e690-4d9b-a6e1-7815589f421e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b56ee5fa-e690-4d9b-a6e1-7815589f421e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 23:05:51 compute-0 NetworkManager[56600]: <info>  [1769555151.4688] device (tap063f8734-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 23:05:51 compute-0 NetworkManager[56600]: <info>  [1769555151.4695] device (tap063f8734-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 23:05:51 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:51.482 107302 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:3b:73 10.100.0.9'], port_security=['fa:16:3e:c8:3b:73 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce133cd3-da57-40b5-95f2-7f015476df55', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '99d030bedd674ca8aef409ccc5f31fd2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f02aeabb-306c-4ede-94b2-5a8ec614cb76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=692d1d1d-750e-40d5-9626-e11cb5b102db, chassis=[<ovs.db.idl.Row object at 0x7f8d908cb640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8d908cb640>], logical_port=063f8734-c708-4ac4-90bf-5a2100f150c8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 23:05:51 compute-0 systemd-machined[157036]: New machine qemu-8-instance-00000008.
Jan 27 23:05:51 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Jan 27 23:05:51 compute-0 ovn_controller[98048]: 2026-01-27T23:05:51Z|00080|binding|INFO|Setting lport 063f8734-c708-4ac4-90bf-5a2100f150c8 ovn-installed in OVS
Jan 27 23:05:51 compute-0 ovn_controller[98048]: 2026-01-27T23:05:51Z|00081|binding|INFO|Setting lport 063f8734-c708-4ac4-90bf-5a2100f150c8 up in Southbound
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.532 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.623 185654 DEBUG nova.virt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Emitting event <LifecycleEvent: 1769555151.6232944, 9033d5a6-ab60-43e3-bbcb-3a8b83161c58 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.624 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] VM Started (Lifecycle Event)
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.645 185654 DEBUG nova.network.neutron [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.745 185654 DEBUG nova.compute.manager [req-c3d1dc60-36af-40d0-bd8a-5f22840ee5f3 req-0ac2cef5-6200-4afc-bca8-75393ea581e3 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Received event network-vif-plugged-64b86a6b-6de4-4fee-917e-229794042e8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.746 185654 DEBUG oslo_concurrency.lockutils [req-c3d1dc60-36af-40d0-bd8a-5f22840ee5f3 req-0ac2cef5-6200-4afc-bca8-75393ea581e3 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "66eb7f87-9511-4da7-8733-ef0673cfab67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.747 185654 DEBUG oslo_concurrency.lockutils [req-c3d1dc60-36af-40d0-bd8a-5f22840ee5f3 req-0ac2cef5-6200-4afc-bca8-75393ea581e3 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "66eb7f87-9511-4da7-8733-ef0673cfab67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.748 185654 DEBUG oslo_concurrency.lockutils [req-c3d1dc60-36af-40d0-bd8a-5f22840ee5f3 req-0ac2cef5-6200-4afc-bca8-75393ea581e3 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "66eb7f87-9511-4da7-8733-ef0673cfab67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.748 185654 DEBUG nova.compute.manager [req-c3d1dc60-36af-40d0-bd8a-5f22840ee5f3 req-0ac2cef5-6200-4afc-bca8-75393ea581e3 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Processing event network-vif-plugged-64b86a6b-6de4-4fee-917e-229794042e8e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.749 185654 DEBUG nova.compute.manager [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.761 185654 DEBUG nova.virt.libvirt.driver [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.767 185654 INFO nova.virt.libvirt.driver [-] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Instance spawned successfully.
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.768 185654 DEBUG nova.virt.libvirt.driver [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.798 185654 DEBUG nova.network.neutron [req-30e5d05d-a821-482c-9b87-8158d3f70ca6 req-fa56a4e0-c76f-4a2f-a246-f4c675ea38c4 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Updated VIF entry in instance network info cache for port 64b86a6b-6de4-4fee-917e-229794042e8e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.800 185654 DEBUG nova.network.neutron [req-30e5d05d-a821-482c-9b87-8158d3f70ca6 req-fa56a4e0-c76f-4a2f-a246-f4c675ea38c4 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Updating instance_info_cache with network_info: [{"id": "64b86a6b-6de4-4fee-917e-229794042e8e", "address": "fa:16:3e:23:60:c6", "network": {"id": "6d0f9d9e-8cd6-4a68-8926-de88e69f60d4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1504245290-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "270690dca2514a49843b866111c87d39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64b86a6b-6d", "ovs_interfaceid": "64b86a6b-6de4-4fee-917e-229794042e8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.806 185654 DEBUG nova.virt.libvirt.driver [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.807 185654 DEBUG nova.virt.libvirt.driver [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.808 185654 DEBUG nova.virt.libvirt.driver [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.808 185654 DEBUG nova.virt.libvirt.driver [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.809 185654 DEBUG nova.virt.libvirt.driver [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.810 185654 DEBUG nova.virt.libvirt.driver [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.814 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.825 185654 DEBUG nova.virt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Emitting event <LifecycleEvent: 1769555151.6235428, 9033d5a6-ab60-43e3-bbcb-3a8b83161c58 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.826 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] VM Paused (Lifecycle Event)
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.856 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.859 185654 DEBUG oslo_concurrency.lockutils [req-30e5d05d-a821-482c-9b87-8158d3f70ca6 req-fa56a4e0-c76f-4a2f-a246-f4c675ea38c4 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Releasing lock "refresh_cache-66eb7f87-9511-4da7-8733-ef0673cfab67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.865 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.887 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.888 185654 DEBUG nova.virt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Emitting event <LifecycleEvent: 1769555151.7595773, 66eb7f87-9511-4da7-8733-ef0673cfab67 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.889 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] VM Resumed (Lifecycle Event)
Jan 27 23:05:51 compute-0 podman[248791]: 2026-01-27 23:05:51.936430451 +0000 UTC m=+0.073592130 container create 03bc964949d7b32af92c67647606a81f6ff4de911d2a5f0092b22cfbfa4783b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b56ee5fa-e690-4d9b-a6e1-7815589f421e, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.949 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.956 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 23:05:51 compute-0 nova_compute[185650]: 2026-01-27 23:05:51.976 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 23:05:51 compute-0 podman[248791]: 2026-01-27 23:05:51.899266812 +0000 UTC m=+0.036428501 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 23:05:52 compute-0 systemd[1]: Started libpod-conmon-03bc964949d7b32af92c67647606a81f6ff4de911d2a5f0092b22cfbfa4783b5.scope.
Jan 27 23:05:52 compute-0 nova_compute[185650]: 2026-01-27 23:05:52.032 185654 INFO nova.compute.manager [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Took 12.95 seconds to spawn the instance on the hypervisor.
Jan 27 23:05:52 compute-0 nova_compute[185650]: 2026-01-27 23:05:52.034 185654 DEBUG nova.compute.manager [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 23:05:52 compute-0 systemd[1]: Started libcrun container.
Jan 27 23:05:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/954d591d6b8aac7e13ddeb7f9cb9148d24ecf46823f44e54559126f287d35ee8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 23:05:52 compute-0 podman[248791]: 2026-01-27 23:05:52.060591448 +0000 UTC m=+0.197753147 container init 03bc964949d7b32af92c67647606a81f6ff4de911d2a5f0092b22cfbfa4783b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b56ee5fa-e690-4d9b-a6e1-7815589f421e, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 27 23:05:52 compute-0 podman[248791]: 2026-01-27 23:05:52.067838091 +0000 UTC m=+0.204999770 container start 03bc964949d7b32af92c67647606a81f6ff4de911d2a5f0092b22cfbfa4783b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b56ee5fa-e690-4d9b-a6e1-7815589f421e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 23:05:52 compute-0 neutron-haproxy-ovnmeta-b56ee5fa-e690-4d9b-a6e1-7815589f421e[248805]: [NOTICE]   (248814) : New worker (248816) forked
Jan 27 23:05:52 compute-0 neutron-haproxy-ovnmeta-b56ee5fa-e690-4d9b-a6e1-7815589f421e[248805]: [NOTICE]   (248814) : Loading success.
Jan 27 23:05:52 compute-0 nova_compute[185650]: 2026-01-27 23:05:52.130 185654 INFO nova.compute.manager [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Took 13.49 seconds to build instance.
Jan 27 23:05:52 compute-0 nova_compute[185650]: 2026-01-27 23:05:52.140 185654 DEBUG nova.virt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Emitting event <LifecycleEvent: 1769555152.139479, 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 23:05:52 compute-0 nova_compute[185650]: 2026-01-27 23:05:52.141 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] VM Started (Lifecycle Event)
Jan 27 23:05:52 compute-0 nova_compute[185650]: 2026-01-27 23:05:52.153 185654 DEBUG oslo_concurrency.lockutils [None req-ee126b6d-0615-4f3c-9b86-e75e6736d3a7 4ed42d6c691545f987cae97bc62b185c 270690dca2514a49843b866111c87d39 - - default default] Lock "66eb7f87-9511-4da7-8733-ef0673cfab67" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:52 compute-0 nova_compute[185650]: 2026-01-27 23:05:52.161 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 23:05:52 compute-0 nova_compute[185650]: 2026-01-27 23:05:52.166 185654 DEBUG nova.virt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Emitting event <LifecycleEvent: 1769555152.1395917, 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 23:05:52 compute-0 nova_compute[185650]: 2026-01-27 23:05:52.170 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] VM Paused (Lifecycle Event)
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:52.171 107302 INFO neutron.agent.ovn.metadata.agent [-] Port 063f8734-c708-4ac4-90bf-5a2100f150c8 in datapath ce133cd3-da57-40b5-95f2-7f015476df55 unbound from our chassis
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:52.174 107302 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce133cd3-da57-40b5-95f2-7f015476df55
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:52.185 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[88243e8a-bdbc-4773-81e6-f684cf262979]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:52.186 107302 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce133cd3-d1 in ovnmeta-ce133cd3-da57-40b5-95f2-7f015476df55 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:52.188 238735 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce133cd3-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:52.188 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[ae261c9b-01f3-4ea4-9e84-52215a1ab571]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:52.189 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[1c873089-db0f-4bb9-aa3e-b27b104f4b7e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:52 compute-0 nova_compute[185650]: 2026-01-27 23:05:52.190 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 23:05:52 compute-0 nova_compute[185650]: 2026-01-27 23:05:52.195 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:52.202 107797 DEBUG oslo.privsep.daemon [-] privsep: reply[f5ffee30-29d5-4b94-9600-405a30a39ce1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:52.226 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[80d767d5-8b24-4dac-89f6-124f0aa1f619]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:52 compute-0 nova_compute[185650]: 2026-01-27 23:05:52.252 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:52.258 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[350707a7-bf21-4756-a12f-40862b25d1f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:52.279 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[5beed78c-4ec9-4e53-8001-3c1533791f87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:52 compute-0 NetworkManager[56600]: <info>  [1769555152.2813] manager: (tapce133cd3-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/43)
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:52.317 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[75d573db-f470-4b59-83ba-21bf06e36396]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:52.320 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[aeb06359-9794-486c-9bdd-0a78c84e0164]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:52 compute-0 NetworkManager[56600]: <info>  [1769555152.3428] device (tapce133cd3-d0): carrier: link connected
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:52.350 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[d77471bb-de35-4097-8cae-9afa1369d0dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:52.367 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[fd7b4272-53f7-4101-929b-9396e6ae4691]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce133cd3-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:8e:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498393, 'reachable_time': 37670, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248838, 'error': None, 'target': 'ovnmeta-ce133cd3-da57-40b5-95f2-7f015476df55', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:52.386 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[1602bd82-88a7-41ff-ab0d-978717f2464e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3c:8eda'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 498393, 'tstamp': 498393}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248839, 'error': None, 'target': 'ovnmeta-ce133cd3-da57-40b5-95f2-7f015476df55', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:52.405 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[bbce9349-3425-4d0d-bb47-9eb41d812e46]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce133cd3-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:8e:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498393, 'reachable_time': 37670, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 248840, 'error': None, 'target': 'ovnmeta-ce133cd3-da57-40b5-95f2-7f015476df55', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:52.435 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[6f084a80-4f73-45e0-9952-84fa0c6e0697]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:52.509 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[f0403fcb-bf56-438a-b2ff-bd366ab59731]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:52.510 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce133cd3-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:52.511 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:52.511 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce133cd3-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:05:52 compute-0 NetworkManager[56600]: <info>  [1769555152.5146] manager: (tapce133cd3-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Jan 27 23:05:52 compute-0 nova_compute[185650]: 2026-01-27 23:05:52.513 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:52 compute-0 kernel: tapce133cd3-d0: entered promiscuous mode
Jan 27 23:05:52 compute-0 nova_compute[185650]: 2026-01-27 23:05:52.519 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:52.520 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce133cd3-d0, col_values=(('external_ids', {'iface-id': '05067d9f-6d8c-4f3c-a42f-40ac1462630e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:05:52 compute-0 nova_compute[185650]: 2026-01-27 23:05:52.521 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:52 compute-0 ovn_controller[98048]: 2026-01-27T23:05:52Z|00082|binding|INFO|Releasing lport 05067d9f-6d8c-4f3c-a42f-40ac1462630e from this chassis (sb_readonly=0)
Jan 27 23:05:52 compute-0 nova_compute[185650]: 2026-01-27 23:05:52.523 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:52.523 107302 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce133cd3-da57-40b5-95f2-7f015476df55.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce133cd3-da57-40b5-95f2-7f015476df55.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:52.524 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[1e8174ef-504a-4f95-9261-6410d534ecae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:52.525 107302 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]: global
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]:     log         /dev/log local0 debug
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]:     log-tag     haproxy-metadata-proxy-ce133cd3-da57-40b5-95f2-7f015476df55
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]:     user        root
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]:     group       root
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]:     maxconn     1024
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]:     pidfile     /var/lib/neutron/external/pids/ce133cd3-da57-40b5-95f2-7f015476df55.pid.haproxy
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]:     daemon
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]: 
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]: defaults
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]:     log global
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]:     mode http
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]:     option httplog
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]:     option dontlognull
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]:     option http-server-close
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]:     option forwardfor
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]:     retries                 3
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]:     timeout http-request    30s
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]:     timeout connect         30s
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]:     timeout client          32s
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]:     timeout server          32s
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]:     timeout http-keep-alive 30s
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]: 
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]: 
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]: listen listener
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]:     bind 169.254.169.254:80
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]:     http-request add-header X-OVN-Network-ID ce133cd3-da57-40b5-95f2-7f015476df55
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 23:05:52 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:52.526 107302 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce133cd3-da57-40b5-95f2-7f015476df55', 'env', 'PROCESS_TAG=haproxy-ce133cd3-da57-40b5-95f2-7f015476df55', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce133cd3-da57-40b5-95f2-7f015476df55.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 23:05:52 compute-0 nova_compute[185650]: 2026-01-27 23:05:52.534 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:52 compute-0 nova_compute[185650]: 2026-01-27 23:05:52.802 185654 DEBUG nova.compute.manager [req-4317266c-21e7-4d69-a1a2-c29aa44aee52 req-922a59bf-e618-4360-8996-4ac99357227f b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Received event network-changed-09ecb7c4-8334-4e9d-8fbc-d238d1a73476 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 23:05:52 compute-0 nova_compute[185650]: 2026-01-27 23:05:52.803 185654 DEBUG nova.compute.manager [req-4317266c-21e7-4d69-a1a2-c29aa44aee52 req-922a59bf-e618-4360-8996-4ac99357227f b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Refreshing instance network info cache due to event network-changed-09ecb7c4-8334-4e9d-8fbc-d238d1a73476. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 23:05:52 compute-0 nova_compute[185650]: 2026-01-27 23:05:52.803 185654 DEBUG oslo_concurrency.lockutils [req-4317266c-21e7-4d69-a1a2-c29aa44aee52 req-922a59bf-e618-4360-8996-4ac99357227f b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "refresh_cache-a5213d25-e31d-4018-991a-ffcc9a3cf495" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 23:05:52 compute-0 nova_compute[185650]: 2026-01-27 23:05:52.887 185654 DEBUG nova.network.neutron [req-4aae8a2e-2b9c-435d-90ed-2f3530aaeaec req-9ffe50a4-02e0-40d9-8d92-9664c484cf42 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Updated VIF entry in instance network info cache for port 063f8734-c708-4ac4-90bf-5a2100f150c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 23:05:52 compute-0 nova_compute[185650]: 2026-01-27 23:05:52.888 185654 DEBUG nova.network.neutron [req-4aae8a2e-2b9c-435d-90ed-2f3530aaeaec req-9ffe50a4-02e0-40d9-8d92-9664c484cf42 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Updating instance_info_cache with network_info: [{"id": "063f8734-c708-4ac4-90bf-5a2100f150c8", "address": "fa:16:3e:c8:3b:73", "network": {"id": "ce133cd3-da57-40b5-95f2-7f015476df55", "bridge": "br-int", "label": "tempest-ServersTestJSON-441071384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99d030bedd674ca8aef409ccc5f31fd2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap063f8734-c7", "ovs_interfaceid": "063f8734-c708-4ac4-90bf-5a2100f150c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 23:05:52 compute-0 nova_compute[185650]: 2026-01-27 23:05:52.930 185654 DEBUG oslo_concurrency.lockutils [req-4aae8a2e-2b9c-435d-90ed-2f3530aaeaec req-9ffe50a4-02e0-40d9-8d92-9664c484cf42 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Releasing lock "refresh_cache-92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 23:05:52 compute-0 podman[248873]: 2026-01-27 23:05:52.939911914 +0000 UTC m=+0.100436985 container create 0b25ec9ee6b6ff9d294bde656ec1f8bbf6ea8a46ebb9eba253084695efbe5c79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce133cd3-da57-40b5-95f2-7f015476df55, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 27 23:05:52 compute-0 podman[248873]: 2026-01-27 23:05:52.877883843 +0000 UTC m=+0.038408914 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 23:05:52 compute-0 systemd[1]: Started libpod-conmon-0b25ec9ee6b6ff9d294bde656ec1f8bbf6ea8a46ebb9eba253084695efbe5c79.scope.
Jan 27 23:05:53 compute-0 systemd[1]: Started libcrun container.
Jan 27 23:05:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57a6b5148f31003b152a5646e3df7bff020019a8c9c708f35bdb3dcfdba17e55/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 23:05:53 compute-0 podman[248873]: 2026-01-27 23:05:53.041594553 +0000 UTC m=+0.202119624 container init 0b25ec9ee6b6ff9d294bde656ec1f8bbf6ea8a46ebb9eba253084695efbe5c79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce133cd3-da57-40b5-95f2-7f015476df55, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 27 23:05:53 compute-0 podman[248873]: 2026-01-27 23:05:53.049333158 +0000 UTC m=+0.209858229 container start 0b25ec9ee6b6ff9d294bde656ec1f8bbf6ea8a46ebb9eba253084695efbe5c79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce133cd3-da57-40b5-95f2-7f015476df55, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 27 23:05:53 compute-0 neutron-haproxy-ovnmeta-ce133cd3-da57-40b5-95f2-7f015476df55[248888]: [NOTICE]   (248892) : New worker (248894) forked
Jan 27 23:05:53 compute-0 neutron-haproxy-ovnmeta-ce133cd3-da57-40b5-95f2-7f015476df55[248888]: [NOTICE]   (248892) : Loading success.
Jan 27 23:05:53 compute-0 nova_compute[185650]: 2026-01-27 23:05:53.839 185654 DEBUG nova.network.neutron [req-c6ed9d1c-3364-4ced-a8c3-eac02d86f38f req-6e968018-95e6-4d92-9d9b-fa128aa6f813 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Updated VIF entry in instance network info cache for port 5c31fe8e-f952-4e71-b32a-ec4759a7fc07. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 23:05:53 compute-0 nova_compute[185650]: 2026-01-27 23:05:53.840 185654 DEBUG nova.network.neutron [req-c6ed9d1c-3364-4ced-a8c3-eac02d86f38f req-6e968018-95e6-4d92-9d9b-fa128aa6f813 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Updating instance_info_cache with network_info: [{"id": "5c31fe8e-f952-4e71-b32a-ec4759a7fc07", "address": "fa:16:3e:81:28:a4", "network": {"id": "b56ee5fa-e690-4d9b-a6e1-7815589f421e", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-161936656-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74f54dfa359341ba8894a95865378d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c31fe8e-f9", "ovs_interfaceid": "5c31fe8e-f952-4e71-b32a-ec4759a7fc07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 23:05:53 compute-0 nova_compute[185650]: 2026-01-27 23:05:53.869 185654 DEBUG oslo_concurrency.lockutils [req-c6ed9d1c-3364-4ced-a8c3-eac02d86f38f req-6e968018-95e6-4d92-9d9b-fa128aa6f813 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Releasing lock "refresh_cache-9033d5a6-ab60-43e3-bbcb-3a8b83161c58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.167 185654 DEBUG nova.network.neutron [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Updating instance_info_cache with network_info: [{"id": "09ecb7c4-8334-4e9d-8fbc-d238d1a73476", "address": "fa:16:3e:3e:5b:de", "network": {"id": "52166b3a-c6fd-46c4-9a20-10228c0e8119", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-393615870-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1841b657d00c42cba8cf6368908d3e05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ecb7c4-83", "ovs_interfaceid": "09ecb7c4-8334-4e9d-8fbc-d238d1a73476", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.202 185654 DEBUG oslo_concurrency.lockutils [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Releasing lock "refresh_cache-a5213d25-e31d-4018-991a-ffcc9a3cf495" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.202 185654 DEBUG nova.compute.manager [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Instance network_info: |[{"id": "09ecb7c4-8334-4e9d-8fbc-d238d1a73476", "address": "fa:16:3e:3e:5b:de", "network": {"id": "52166b3a-c6fd-46c4-9a20-10228c0e8119", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-393615870-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1841b657d00c42cba8cf6368908d3e05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ecb7c4-83", "ovs_interfaceid": "09ecb7c4-8334-4e9d-8fbc-d238d1a73476", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.203 185654 DEBUG oslo_concurrency.lockutils [req-4317266c-21e7-4d69-a1a2-c29aa44aee52 req-922a59bf-e618-4360-8996-4ac99357227f b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquired lock "refresh_cache-a5213d25-e31d-4018-991a-ffcc9a3cf495" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.203 185654 DEBUG nova.network.neutron [req-4317266c-21e7-4d69-a1a2-c29aa44aee52 req-922a59bf-e618-4360-8996-4ac99357227f b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Refreshing network info cache for port 09ecb7c4-8334-4e9d-8fbc-d238d1a73476 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.207 185654 DEBUG nova.virt.libvirt.driver [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Start _get_guest_xml network_info=[{"id": "09ecb7c4-8334-4e9d-8fbc-d238d1a73476", "address": "fa:16:3e:3e:5b:de", "network": {"id": "52166b3a-c6fd-46c4-9a20-10228c0e8119", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-393615870-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1841b657d00c42cba8cf6368908d3e05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ecb7c4-83", "ovs_interfaceid": "09ecb7c4-8334-4e9d-8fbc-d238d1a73476", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T23:04:44Z,direct_url=<?>,disk_format='qcow2',id=319632d9-1bdd-4de0-b1d2-0507a3e91b6b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8318d5a200d74e4386cf4972db015b75',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T23:04:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_format': None, 'boot_index': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '319632d9-1bdd-4de0-b1d2-0507a3e91b6b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.213 185654 WARNING nova.virt.libvirt.driver [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.224 185654 DEBUG nova.virt.libvirt.host [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.226 185654 DEBUG nova.virt.libvirt.host [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.234 185654 DEBUG nova.virt.libvirt.host [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.235 185654 DEBUG nova.virt.libvirt.host [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.236 185654 DEBUG nova.virt.libvirt.driver [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.237 185654 DEBUG nova.virt.hardware [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T23:04:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='d732a0b9-79cd-4ff7-8741-11ae188a8b69',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T23:04:44Z,direct_url=<?>,disk_format='qcow2',id=319632d9-1bdd-4de0-b1d2-0507a3e91b6b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8318d5a200d74e4386cf4972db015b75',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T23:04:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.238 185654 DEBUG nova.virt.hardware [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.239 185654 DEBUG nova.virt.hardware [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.239 185654 DEBUG nova.virt.hardware [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.240 185654 DEBUG nova.virt.hardware [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.241 185654 DEBUG nova.virt.hardware [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.241 185654 DEBUG nova.virt.hardware [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.242 185654 DEBUG nova.virt.hardware [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.243 185654 DEBUG nova.virt.hardware [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.243 185654 DEBUG nova.virt.hardware [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.244 185654 DEBUG nova.virt.hardware [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.248 185654 DEBUG nova.virt.libvirt.vif [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T23:05:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1101404520',display_name='tempest-ServerAddressesTestJSON-server-1101404520',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1101404520',id=9,image_ref='319632d9-1bdd-4de0-b1d2-0507a3e91b6b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1841b657d00c42cba8cf6368908d3e05',ramdisk_id='',reservation_id='r-zhd9h390',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='319632d9-1bdd-4de0-b1d2-0507a3e91b6b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1567840329',owner_user_name='tempest-ServerAddresses
TestJSON-1567840329-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T23:05:44Z,user_data=None,user_id='97de12b7dcf64c95a6ef85a1de71a992',uuid=a5213d25-e31d-4018-991a-ffcc9a3cf495,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "09ecb7c4-8334-4e9d-8fbc-d238d1a73476", "address": "fa:16:3e:3e:5b:de", "network": {"id": "52166b3a-c6fd-46c4-9a20-10228c0e8119", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-393615870-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1841b657d00c42cba8cf6368908d3e05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ecb7c4-83", "ovs_interfaceid": "09ecb7c4-8334-4e9d-8fbc-d238d1a73476", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.249 185654 DEBUG nova.network.os_vif_util [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Converting VIF {"id": "09ecb7c4-8334-4e9d-8fbc-d238d1a73476", "address": "fa:16:3e:3e:5b:de", "network": {"id": "52166b3a-c6fd-46c4-9a20-10228c0e8119", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-393615870-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1841b657d00c42cba8cf6368908d3e05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ecb7c4-83", "ovs_interfaceid": "09ecb7c4-8334-4e9d-8fbc-d238d1a73476", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.251 185654 DEBUG nova.network.os_vif_util [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:5b:de,bridge_name='br-int',has_traffic_filtering=True,id=09ecb7c4-8334-4e9d-8fbc-d238d1a73476,network=Network(52166b3a-c6fd-46c4-9a20-10228c0e8119),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09ecb7c4-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.252 185654 DEBUG nova.objects.instance [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Lazy-loading 'pci_devices' on Instance uuid a5213d25-e31d-4018-991a-ffcc9a3cf495 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.287 185654 DEBUG nova.virt.libvirt.driver [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] End _get_guest_xml xml=<domain type="kvm">
Jan 27 23:05:54 compute-0 nova_compute[185650]:   <uuid>a5213d25-e31d-4018-991a-ffcc9a3cf495</uuid>
Jan 27 23:05:54 compute-0 nova_compute[185650]:   <name>instance-00000009</name>
Jan 27 23:05:54 compute-0 nova_compute[185650]:   <memory>131072</memory>
Jan 27 23:05:54 compute-0 nova_compute[185650]:   <vcpu>1</vcpu>
Jan 27 23:05:54 compute-0 nova_compute[185650]:   <metadata>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 23:05:54 compute-0 nova_compute[185650]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:       <nova:name>tempest-ServerAddressesTestJSON-server-1101404520</nova:name>
Jan 27 23:05:54 compute-0 nova_compute[185650]:       <nova:creationTime>2026-01-27 23:05:54</nova:creationTime>
Jan 27 23:05:54 compute-0 nova_compute[185650]:       <nova:flavor name="m1.nano">
Jan 27 23:05:54 compute-0 nova_compute[185650]:         <nova:memory>128</nova:memory>
Jan 27 23:05:54 compute-0 nova_compute[185650]:         <nova:disk>1</nova:disk>
Jan 27 23:05:54 compute-0 nova_compute[185650]:         <nova:swap>0</nova:swap>
Jan 27 23:05:54 compute-0 nova_compute[185650]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 23:05:54 compute-0 nova_compute[185650]:         <nova:vcpus>1</nova:vcpus>
Jan 27 23:05:54 compute-0 nova_compute[185650]:       </nova:flavor>
Jan 27 23:05:54 compute-0 nova_compute[185650]:       <nova:owner>
Jan 27 23:05:54 compute-0 nova_compute[185650]:         <nova:user uuid="97de12b7dcf64c95a6ef85a1de71a992">tempest-ServerAddressesTestJSON-1567840329-project-member</nova:user>
Jan 27 23:05:54 compute-0 nova_compute[185650]:         <nova:project uuid="1841b657d00c42cba8cf6368908d3e05">tempest-ServerAddressesTestJSON-1567840329</nova:project>
Jan 27 23:05:54 compute-0 nova_compute[185650]:       </nova:owner>
Jan 27 23:05:54 compute-0 nova_compute[185650]:       <nova:root type="image" uuid="319632d9-1bdd-4de0-b1d2-0507a3e91b6b"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:       <nova:ports>
Jan 27 23:05:54 compute-0 nova_compute[185650]:         <nova:port uuid="09ecb7c4-8334-4e9d-8fbc-d238d1a73476">
Jan 27 23:05:54 compute-0 nova_compute[185650]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:         </nova:port>
Jan 27 23:05:54 compute-0 nova_compute[185650]:       </nova:ports>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     </nova:instance>
Jan 27 23:05:54 compute-0 nova_compute[185650]:   </metadata>
Jan 27 23:05:54 compute-0 nova_compute[185650]:   <sysinfo type="smbios">
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <system>
Jan 27 23:05:54 compute-0 nova_compute[185650]:       <entry name="manufacturer">RDO</entry>
Jan 27 23:05:54 compute-0 nova_compute[185650]:       <entry name="product">OpenStack Compute</entry>
Jan 27 23:05:54 compute-0 nova_compute[185650]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 23:05:54 compute-0 nova_compute[185650]:       <entry name="serial">a5213d25-e31d-4018-991a-ffcc9a3cf495</entry>
Jan 27 23:05:54 compute-0 nova_compute[185650]:       <entry name="uuid">a5213d25-e31d-4018-991a-ffcc9a3cf495</entry>
Jan 27 23:05:54 compute-0 nova_compute[185650]:       <entry name="family">Virtual Machine</entry>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     </system>
Jan 27 23:05:54 compute-0 nova_compute[185650]:   </sysinfo>
Jan 27 23:05:54 compute-0 nova_compute[185650]:   <os>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <boot dev="hd"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <smbios mode="sysinfo"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:   </os>
Jan 27 23:05:54 compute-0 nova_compute[185650]:   <features>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <acpi/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <apic/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <vmcoreinfo/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:   </features>
Jan 27 23:05:54 compute-0 nova_compute[185650]:   <clock offset="utc">
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <timer name="hpet" present="no"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:   </clock>
Jan 27 23:05:54 compute-0 nova_compute[185650]:   <cpu mode="host-model" match="exact">
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:   </cpu>
Jan 27 23:05:54 compute-0 nova_compute[185650]:   <devices>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <disk type="file" device="disk">
Jan 27 23:05:54 compute-0 nova_compute[185650]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:       <source file="/var/lib/nova/instances/a5213d25-e31d-4018-991a-ffcc9a3cf495/disk"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:       <target dev="vda" bus="virtio"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     </disk>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <disk type="file" device="cdrom">
Jan 27 23:05:54 compute-0 nova_compute[185650]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:       <source file="/var/lib/nova/instances/a5213d25-e31d-4018-991a-ffcc9a3cf495/disk.config"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:       <target dev="sda" bus="sata"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     </disk>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <interface type="ethernet">
Jan 27 23:05:54 compute-0 nova_compute[185650]:       <mac address="fa:16:3e:3e:5b:de"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:       <model type="virtio"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:       <mtu size="1442"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:       <target dev="tap09ecb7c4-83"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     </interface>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <serial type="pty">
Jan 27 23:05:54 compute-0 nova_compute[185650]:       <log file="/var/lib/nova/instances/a5213d25-e31d-4018-991a-ffcc9a3cf495/console.log" append="off"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     </serial>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <video>
Jan 27 23:05:54 compute-0 nova_compute[185650]:       <model type="virtio"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     </video>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <input type="tablet" bus="usb"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <rng model="virtio">
Jan 27 23:05:54 compute-0 nova_compute[185650]:       <backend model="random">/dev/urandom</backend>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     </rng>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <controller type="usb" index="0"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     <memballoon model="virtio">
Jan 27 23:05:54 compute-0 nova_compute[185650]:       <stats period="10"/>
Jan 27 23:05:54 compute-0 nova_compute[185650]:     </memballoon>
Jan 27 23:05:54 compute-0 nova_compute[185650]:   </devices>
Jan 27 23:05:54 compute-0 nova_compute[185650]: </domain>
Jan 27 23:05:54 compute-0 nova_compute[185650]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.295 185654 DEBUG nova.compute.manager [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Preparing to wait for external event network-vif-plugged-09ecb7c4-8334-4e9d-8fbc-d238d1a73476 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.296 185654 DEBUG oslo_concurrency.lockutils [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Acquiring lock "a5213d25-e31d-4018-991a-ffcc9a3cf495-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.296 185654 DEBUG oslo_concurrency.lockutils [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Lock "a5213d25-e31d-4018-991a-ffcc9a3cf495-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.297 185654 DEBUG oslo_concurrency.lockutils [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Lock "a5213d25-e31d-4018-991a-ffcc9a3cf495-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.297 185654 DEBUG nova.virt.libvirt.vif [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T23:05:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1101404520',display_name='tempest-ServerAddressesTestJSON-server-1101404520',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1101404520',id=9,image_ref='319632d9-1bdd-4de0-b1d2-0507a3e91b6b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1841b657d00c42cba8cf6368908d3e05',ramdisk_id='',reservation_id='r-zhd9h390',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='319632d9-1bdd-4de0-b1d2-0507a3e91b6b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1567840329',owner_user_name='tempest-ServerAddressesTestJSON-1567840329-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T23:05:44Z,user_data=None,user_id='97de12b7dcf64c95a6ef85a1de71a992',uuid=a5213d25-e31d-4018-991a-ffcc9a3cf495,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "09ecb7c4-8334-4e9d-8fbc-d238d1a73476", "address": "fa:16:3e:3e:5b:de", "network": {"id": "52166b3a-c6fd-46c4-9a20-10228c0e8119", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-393615870-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1841b657d00c42cba8cf6368908d3e05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ecb7c4-83", "ovs_interfaceid": "09ecb7c4-8334-4e9d-8fbc-d238d1a73476", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.298 185654 DEBUG nova.network.os_vif_util [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Converting VIF {"id": "09ecb7c4-8334-4e9d-8fbc-d238d1a73476", "address": "fa:16:3e:3e:5b:de", "network": {"id": "52166b3a-c6fd-46c4-9a20-10228c0e8119", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-393615870-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1841b657d00c42cba8cf6368908d3e05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ecb7c4-83", "ovs_interfaceid": "09ecb7c4-8334-4e9d-8fbc-d238d1a73476", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.299 185654 DEBUG nova.network.os_vif_util [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:5b:de,bridge_name='br-int',has_traffic_filtering=True,id=09ecb7c4-8334-4e9d-8fbc-d238d1a73476,network=Network(52166b3a-c6fd-46c4-9a20-10228c0e8119),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09ecb7c4-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.299 185654 DEBUG os_vif [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:5b:de,bridge_name='br-int',has_traffic_filtering=True,id=09ecb7c4-8334-4e9d-8fbc-d238d1a73476,network=Network(52166b3a-c6fd-46c4-9a20-10228c0e8119),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09ecb7c4-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.300 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.300 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.301 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.304 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.304 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09ecb7c4-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.305 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap09ecb7c4-83, col_values=(('external_ids', {'iface-id': '09ecb7c4-8334-4e9d-8fbc-d238d1a73476', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3e:5b:de', 'vm-uuid': 'a5213d25-e31d-4018-991a-ffcc9a3cf495'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.306 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:54 compute-0 NetworkManager[56600]: <info>  [1769555154.3078] manager: (tap09ecb7c4-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.310 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.314 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.314 185654 INFO os_vif [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:5b:de,bridge_name='br-int',has_traffic_filtering=True,id=09ecb7c4-8334-4e9d-8fbc-d238d1a73476,network=Network(52166b3a-c6fd-46c4-9a20-10228c0e8119),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09ecb7c4-83')
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.396 185654 DEBUG nova.virt.libvirt.driver [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.397 185654 DEBUG nova.virt.libvirt.driver [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.397 185654 DEBUG nova.virt.libvirt.driver [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] No VIF found with MAC fa:16:3e:3e:5b:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 23:05:54 compute-0 nova_compute[185650]: 2026-01-27 23:05:54.398 185654 INFO nova.virt.libvirt.driver [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Using config drive
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.151 185654 DEBUG nova.compute.manager [req-f82da44c-b087-42fc-acaa-ad6b017fad2a req-93053c74-bdd7-49be-a635-73e2d72f7c8a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Received event network-vif-plugged-64b86a6b-6de4-4fee-917e-229794042e8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.152 185654 DEBUG oslo_concurrency.lockutils [req-f82da44c-b087-42fc-acaa-ad6b017fad2a req-93053c74-bdd7-49be-a635-73e2d72f7c8a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "66eb7f87-9511-4da7-8733-ef0673cfab67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.153 185654 DEBUG oslo_concurrency.lockutils [req-f82da44c-b087-42fc-acaa-ad6b017fad2a req-93053c74-bdd7-49be-a635-73e2d72f7c8a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "66eb7f87-9511-4da7-8733-ef0673cfab67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.153 185654 DEBUG oslo_concurrency.lockutils [req-f82da44c-b087-42fc-acaa-ad6b017fad2a req-93053c74-bdd7-49be-a635-73e2d72f7c8a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "66eb7f87-9511-4da7-8733-ef0673cfab67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.153 185654 DEBUG nova.compute.manager [req-f82da44c-b087-42fc-acaa-ad6b017fad2a req-93053c74-bdd7-49be-a635-73e2d72f7c8a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] No waiting events found dispatching network-vif-plugged-64b86a6b-6de4-4fee-917e-229794042e8e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.154 185654 WARNING nova.compute.manager [req-f82da44c-b087-42fc-acaa-ad6b017fad2a req-93053c74-bdd7-49be-a635-73e2d72f7c8a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Received unexpected event network-vif-plugged-64b86a6b-6de4-4fee-917e-229794042e8e for instance with vm_state active and task_state None.
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.154 185654 DEBUG nova.compute.manager [req-f82da44c-b087-42fc-acaa-ad6b017fad2a req-93053c74-bdd7-49be-a635-73e2d72f7c8a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Received event network-vif-plugged-5c31fe8e-f952-4e71-b32a-ec4759a7fc07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.154 185654 DEBUG oslo_concurrency.lockutils [req-f82da44c-b087-42fc-acaa-ad6b017fad2a req-93053c74-bdd7-49be-a635-73e2d72f7c8a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "9033d5a6-ab60-43e3-bbcb-3a8b83161c58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.155 185654 DEBUG oslo_concurrency.lockutils [req-f82da44c-b087-42fc-acaa-ad6b017fad2a req-93053c74-bdd7-49be-a635-73e2d72f7c8a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "9033d5a6-ab60-43e3-bbcb-3a8b83161c58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.155 185654 DEBUG oslo_concurrency.lockutils [req-f82da44c-b087-42fc-acaa-ad6b017fad2a req-93053c74-bdd7-49be-a635-73e2d72f7c8a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "9033d5a6-ab60-43e3-bbcb-3a8b83161c58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.156 185654 DEBUG nova.compute.manager [req-f82da44c-b087-42fc-acaa-ad6b017fad2a req-93053c74-bdd7-49be-a635-73e2d72f7c8a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Processing event network-vif-plugged-5c31fe8e-f952-4e71-b32a-ec4759a7fc07 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.156 185654 DEBUG nova.compute.manager [req-f82da44c-b087-42fc-acaa-ad6b017fad2a req-93053c74-bdd7-49be-a635-73e2d72f7c8a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Received event network-vif-plugged-5c31fe8e-f952-4e71-b32a-ec4759a7fc07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.156 185654 DEBUG oslo_concurrency.lockutils [req-f82da44c-b087-42fc-acaa-ad6b017fad2a req-93053c74-bdd7-49be-a635-73e2d72f7c8a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "9033d5a6-ab60-43e3-bbcb-3a8b83161c58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.157 185654 DEBUG oslo_concurrency.lockutils [req-f82da44c-b087-42fc-acaa-ad6b017fad2a req-93053c74-bdd7-49be-a635-73e2d72f7c8a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "9033d5a6-ab60-43e3-bbcb-3a8b83161c58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.157 185654 DEBUG oslo_concurrency.lockutils [req-f82da44c-b087-42fc-acaa-ad6b017fad2a req-93053c74-bdd7-49be-a635-73e2d72f7c8a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "9033d5a6-ab60-43e3-bbcb-3a8b83161c58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.157 185654 DEBUG nova.compute.manager [req-f82da44c-b087-42fc-acaa-ad6b017fad2a req-93053c74-bdd7-49be-a635-73e2d72f7c8a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] No waiting events found dispatching network-vif-plugged-5c31fe8e-f952-4e71-b32a-ec4759a7fc07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.158 185654 WARNING nova.compute.manager [req-f82da44c-b087-42fc-acaa-ad6b017fad2a req-93053c74-bdd7-49be-a635-73e2d72f7c8a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Received unexpected event network-vif-plugged-5c31fe8e-f952-4e71-b32a-ec4759a7fc07 for instance with vm_state building and task_state spawning.
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.159 185654 DEBUG nova.compute.manager [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.166 185654 DEBUG nova.virt.libvirt.driver [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.171 185654 INFO nova.virt.libvirt.driver [-] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Instance spawned successfully.
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.171 185654 DEBUG nova.virt.libvirt.driver [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.183 185654 DEBUG nova.virt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Emitting event <LifecycleEvent: 1769555155.1829364, 9033d5a6-ab60-43e3-bbcb-3a8b83161c58 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.183 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] VM Resumed (Lifecycle Event)
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.200 185654 DEBUG nova.virt.libvirt.driver [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.200 185654 DEBUG nova.virt.libvirt.driver [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.201 185654 DEBUG nova.virt.libvirt.driver [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.202 185654 DEBUG nova.virt.libvirt.driver [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.202 185654 DEBUG nova.virt.libvirt.driver [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.203 185654 DEBUG nova.virt.libvirt.driver [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.412 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.424 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.521 185654 INFO nova.compute.manager [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Took 15.56 seconds to spawn the instance on the hypervisor.
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.521 185654 DEBUG nova.compute.manager [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.577 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.619 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.755 185654 INFO nova.virt.libvirt.driver [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Creating config drive at /var/lib/nova/instances/a5213d25-e31d-4018-991a-ffcc9a3cf495/disk.config
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.761 185654 DEBUG oslo_concurrency.processutils [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a5213d25-e31d-4018-991a-ffcc9a3cf495/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpws0_7340 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:05:55 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.887 185654 DEBUG oslo_concurrency.processutils [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a5213d25-e31d-4018-991a-ffcc9a3cf495/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpws0_7340" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:05:55 compute-0 kernel: tap09ecb7c4-83: entered promiscuous mode
Jan 27 23:05:55 compute-0 NetworkManager[56600]: <info>  [1769555155.9885] manager: (tap09ecb7c4-83): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Jan 27 23:05:55 compute-0 ovn_controller[98048]: 2026-01-27T23:05:55Z|00083|binding|INFO|Claiming lport 09ecb7c4-8334-4e9d-8fbc-d238d1a73476 for this chassis.
Jan 27 23:05:55 compute-0 ovn_controller[98048]: 2026-01-27T23:05:55Z|00084|binding|INFO|09ecb7c4-8334-4e9d-8fbc-d238d1a73476: Claiming fa:16:3e:3e:5b:de 10.100.0.10
Jan 27 23:05:56 compute-0 nova_compute[185650]: 2026-01-27 23:05:55.997 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:56 compute-0 systemd-udevd[248942]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 23:05:56 compute-0 systemd-machined[157036]: New machine qemu-9-instance-00000009.
Jan 27 23:05:56 compute-0 NetworkManager[56600]: <info>  [1769555156.0620] device (tap09ecb7c4-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 23:05:56 compute-0 NetworkManager[56600]: <info>  [1769555156.0626] device (tap09ecb7c4-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 23:05:56 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Jan 27 23:05:56 compute-0 nova_compute[185650]: 2026-01-27 23:05:56.066 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:56 compute-0 ovn_controller[98048]: 2026-01-27T23:05:56Z|00085|binding|INFO|Setting lport 09ecb7c4-8334-4e9d-8fbc-d238d1a73476 ovn-installed in OVS
Jan 27 23:05:56 compute-0 nova_compute[185650]: 2026-01-27 23:05:56.075 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:56 compute-0 podman[248916]: 2026-01-27 23:05:56.109290377 +0000 UTC m=+0.140002620 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent)
Jan 27 23:05:56 compute-0 podman[248917]: 2026-01-27 23:05:56.125825507 +0000 UTC m=+0.152298207 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_id=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Jan 27 23:05:56 compute-0 nova_compute[185650]: 2026-01-27 23:05:56.138 185654 INFO nova.compute.manager [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Took 16.59 seconds to build instance.
Jan 27 23:05:56 compute-0 ovn_controller[98048]: 2026-01-27T23:05:56Z|00086|binding|INFO|Setting lport 09ecb7c4-8334-4e9d-8fbc-d238d1a73476 up in Southbound
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:56.164 107302 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:5b:de 10.100.0.10'], port_security=['fa:16:3e:3e:5b:de 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a5213d25-e31d-4018-991a-ffcc9a3cf495', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52166b3a-c6fd-46c4-9a20-10228c0e8119', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1841b657d00c42cba8cf6368908d3e05', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a25c1474-cdc4-45ab-8ec7-32034a13f236', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d06d7c56-fdba-4ff8-88aa-2b240f5b89c7, chassis=[<ovs.db.idl.Row object at 0x7f8d908cb640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8d908cb640>], logical_port=09ecb7c4-8334-4e9d-8fbc-d238d1a73476) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:56.167 107302 INFO neutron.agent.ovn.metadata.agent [-] Port 09ecb7c4-8334-4e9d-8fbc-d238d1a73476 in datapath 52166b3a-c6fd-46c4-9a20-10228c0e8119 bound to our chassis
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:56.169 107302 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52166b3a-c6fd-46c4-9a20-10228c0e8119
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:56.181 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[85b5d565-a3b1-4227-9afe-34f8c2e10bdd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:56.182 107302 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap52166b3a-c1 in ovnmeta-52166b3a-c6fd-46c4-9a20-10228c0e8119 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:56.185 238735 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap52166b3a-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:56.185 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[b6fb4927-b6c3-42c4-8f34-649e0e1c4822]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:56.186 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[d740e1c2-4907-4aba-9662-1747f53ea811]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:56.201 107797 DEBUG oslo.privsep.daemon [-] privsep: reply[cb5193cb-d27a-4fdb-8e1f-738ef0671be4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:56.227 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[8fcbcccc-c84b-439c-ae7d-4969232b0aad]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:56.255 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[44d8c91f-5239-4b4d-a1cc-a0e4b1bb523c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:56 compute-0 NetworkManager[56600]: <info>  [1769555156.2665] manager: (tap52166b3a-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/47)
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:56.269 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[e9d6381d-afc5-4273-b3e7-c2f59fa00120]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:56.304 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[3663eef4-6653-4dca-8444-712893a25c37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:56.307 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[0b6a2824-fb30-4db2-98de-071cf6f1bbef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:56 compute-0 NetworkManager[56600]: <info>  [1769555156.3326] device (tap52166b3a-c0): carrier: link connected
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:56.339 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[36df4081-679a-45bb-89bd-63e4532693d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:56 compute-0 nova_compute[185650]: 2026-01-27 23:05:56.367 185654 DEBUG oslo_concurrency.lockutils [None req-d0cc89e0-dc96-4fc1-a0c8-86b7dcb14628 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Lock "9033d5a6-ab60-43e3-bbcb-3a8b83161c58" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.894s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:56.383 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[c344ccc1-bee9-4740-ae76-c99cbf0a7939]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52166b3a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:49:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498792, 'reachable_time': 41981, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248994, 'error': None, 'target': 'ovnmeta-52166b3a-c6fd-46c4-9a20-10228c0e8119', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:56.404 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[38ee76e7-dabd-4e5a-bb4f-d347e261be0c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb8:4976'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 498792, 'tstamp': 498792}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248995, 'error': None, 'target': 'ovnmeta-52166b3a-c6fd-46c4-9a20-10228c0e8119', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:56.424 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[bcd90e98-1da7-44ec-b3f4-5e45cdfd5c5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52166b3a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:49:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498792, 'reachable_time': 41981, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 248996, 'error': None, 'target': 'ovnmeta-52166b3a-c6fd-46c4-9a20-10228c0e8119', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:56.459 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[b51235f5-ec21-4462-accc-31f4d8a071eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:56.516 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[d44f705c-dba3-49a3-90db-cf300afbdccf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:56.518 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52166b3a-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:56.518 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:56.518 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52166b3a-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:05:56 compute-0 kernel: tap52166b3a-c0: entered promiscuous mode
Jan 27 23:05:56 compute-0 NetworkManager[56600]: <info>  [1769555156.5217] manager: (tap52166b3a-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Jan 27 23:05:56 compute-0 nova_compute[185650]: 2026-01-27 23:05:56.525 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:56.541 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52166b3a-c0, col_values=(('external_ids', {'iface-id': '921266f6-a26d-4b9c-9f31-d1b9bea161ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:05:56 compute-0 ovn_controller[98048]: 2026-01-27T23:05:56Z|00087|binding|INFO|Releasing lport 921266f6-a26d-4b9c-9f31-d1b9bea161ea from this chassis (sb_readonly=0)
Jan 27 23:05:56 compute-0 nova_compute[185650]: 2026-01-27 23:05:56.554 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:56 compute-0 nova_compute[185650]: 2026-01-27 23:05:56.558 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:56.557 107302 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/52166b3a-c6fd-46c4-9a20-10228c0e8119.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/52166b3a-c6fd-46c4-9a20-10228c0e8119.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:56.564 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[8e120ef0-6d13-4a32-a756-2f57fb6082b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:56.565 107302 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: global
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]:     log         /dev/log local0 debug
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]:     log-tag     haproxy-metadata-proxy-52166b3a-c6fd-46c4-9a20-10228c0e8119
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]:     user        root
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]:     group       root
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]:     maxconn     1024
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]:     pidfile     /var/lib/neutron/external/pids/52166b3a-c6fd-46c4-9a20-10228c0e8119.pid.haproxy
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]:     daemon
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: 
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: defaults
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]:     log global
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]:     mode http
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]:     option httplog
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]:     option dontlognull
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]:     option http-server-close
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]:     option forwardfor
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]:     retries                 3
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]:     timeout http-request    30s
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]:     timeout connect         30s
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]:     timeout client          32s
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]:     timeout server          32s
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]:     timeout http-keep-alive 30s
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: 
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: 
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: listen listener
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]:     bind 169.254.169.254:80
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]:     http-request add-header X-OVN-Network-ID 52166b3a-c6fd-46c4-9a20-10228c0e8119
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 23:05:56 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:05:56.568 107302 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-52166b3a-c6fd-46c4-9a20-10228c0e8119', 'env', 'PROCESS_TAG=haproxy-52166b3a-c6fd-46c4-9a20-10228c0e8119', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/52166b3a-c6fd-46c4-9a20-10228c0e8119.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 23:05:56 compute-0 nova_compute[185650]: 2026-01-27 23:05:56.690 185654 DEBUG nova.virt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Emitting event <LifecycleEvent: 1769555156.6903727, a5213d25-e31d-4018-991a-ffcc9a3cf495 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 23:05:56 compute-0 nova_compute[185650]: 2026-01-27 23:05:56.691 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] VM Started (Lifecycle Event)
Jan 27 23:05:56 compute-0 nova_compute[185650]: 2026-01-27 23:05:56.874 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 23:05:56 compute-0 nova_compute[185650]: 2026-01-27 23:05:56.884 185654 DEBUG nova.virt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Emitting event <LifecycleEvent: 1769555156.6904755, a5213d25-e31d-4018-991a-ffcc9a3cf495 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 23:05:56 compute-0 nova_compute[185650]: 2026-01-27 23:05:56.885 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] VM Paused (Lifecycle Event)
Jan 27 23:05:57 compute-0 nova_compute[185650]: 2026-01-27 23:05:57.089 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 23:05:57 compute-0 nova_compute[185650]: 2026-01-27 23:05:57.464 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 23:05:57 compute-0 nova_compute[185650]: 2026-01-27 23:05:57.535 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 23:05:57 compute-0 podman[249035]: 2026-01-27 23:05:57.558441798 +0000 UTC m=+0.070277162 container create 58a5437dc90aa9a1e487b5bcf055824bd5acc72e83167f66ffeadbb1165fe26a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-52166b3a-c6fd-46c4-9a20-10228c0e8119, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 27 23:05:57 compute-0 nova_compute[185650]: 2026-01-27 23:05:57.562 185654 DEBUG nova.network.neutron [req-4317266c-21e7-4d69-a1a2-c29aa44aee52 req-922a59bf-e618-4360-8996-4ac99357227f b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Updated VIF entry in instance network info cache for port 09ecb7c4-8334-4e9d-8fbc-d238d1a73476. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 23:05:57 compute-0 nova_compute[185650]: 2026-01-27 23:05:57.563 185654 DEBUG nova.network.neutron [req-4317266c-21e7-4d69-a1a2-c29aa44aee52 req-922a59bf-e618-4360-8996-4ac99357227f b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Updating instance_info_cache with network_info: [{"id": "09ecb7c4-8334-4e9d-8fbc-d238d1a73476", "address": "fa:16:3e:3e:5b:de", "network": {"id": "52166b3a-c6fd-46c4-9a20-10228c0e8119", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-393615870-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1841b657d00c42cba8cf6368908d3e05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ecb7c4-83", "ovs_interfaceid": "09ecb7c4-8334-4e9d-8fbc-d238d1a73476", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 23:05:57 compute-0 podman[249035]: 2026-01-27 23:05:57.522811979 +0000 UTC m=+0.034647363 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 23:05:57 compute-0 systemd[1]: Started libpod-conmon-58a5437dc90aa9a1e487b5bcf055824bd5acc72e83167f66ffeadbb1165fe26a.scope.
Jan 27 23:05:57 compute-0 nova_compute[185650]: 2026-01-27 23:05:57.624 185654 DEBUG oslo_concurrency.lockutils [req-4317266c-21e7-4d69-a1a2-c29aa44aee52 req-922a59bf-e618-4360-8996-4ac99357227f b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Releasing lock "refresh_cache-a5213d25-e31d-4018-991a-ffcc9a3cf495" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 23:05:57 compute-0 systemd[1]: Started libcrun container.
Jan 27 23:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a73079b7a18b1b5718fbbf5164dd2db91daea6a2fffe5f5bcf820f2b84766a6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 23:05:57 compute-0 podman[249035]: 2026-01-27 23:05:57.670923613 +0000 UTC m=+0.182758997 container init 58a5437dc90aa9a1e487b5bcf055824bd5acc72e83167f66ffeadbb1165fe26a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-52166b3a-c6fd-46c4-9a20-10228c0e8119, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 27 23:05:57 compute-0 podman[249035]: 2026-01-27 23:05:57.679253565 +0000 UTC m=+0.191088929 container start 58a5437dc90aa9a1e487b5bcf055824bd5acc72e83167f66ffeadbb1165fe26a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-52166b3a-c6fd-46c4-9a20-10228c0e8119, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 27 23:05:57 compute-0 neutron-haproxy-ovnmeta-52166b3a-c6fd-46c4-9a20-10228c0e8119[249050]: [NOTICE]   (249055) : New worker (249057) forked
Jan 27 23:05:57 compute-0 neutron-haproxy-ovnmeta-52166b3a-c6fd-46c4-9a20-10228c0e8119[249050]: [NOTICE]   (249055) : Loading success.
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.530 185654 DEBUG nova.compute.manager [req-6afa3817-43de-4c20-a159-740f95277fa5 req-cda5db00-5ecf-4c0d-8edd-38cd1a231e2a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Received event network-vif-plugged-063f8734-c708-4ac4-90bf-5a2100f150c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.531 185654 DEBUG oslo_concurrency.lockutils [req-6afa3817-43de-4c20-a159-740f95277fa5 req-cda5db00-5ecf-4c0d-8edd-38cd1a231e2a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.531 185654 DEBUG oslo_concurrency.lockutils [req-6afa3817-43de-4c20-a159-740f95277fa5 req-cda5db00-5ecf-4c0d-8edd-38cd1a231e2a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.531 185654 DEBUG oslo_concurrency.lockutils [req-6afa3817-43de-4c20-a159-740f95277fa5 req-cda5db00-5ecf-4c0d-8edd-38cd1a231e2a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.531 185654 DEBUG nova.compute.manager [req-6afa3817-43de-4c20-a159-740f95277fa5 req-cda5db00-5ecf-4c0d-8edd-38cd1a231e2a b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Processing event network-vif-plugged-063f8734-c708-4ac4-90bf-5a2100f150c8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.532 185654 DEBUG nova.compute.manager [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.547 185654 DEBUG nova.virt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Emitting event <LifecycleEvent: 1769555158.5369544, 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.547 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] VM Resumed (Lifecycle Event)
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.549 185654 DEBUG nova.virt.libvirt.driver [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.555 185654 INFO nova.virt.libvirt.driver [-] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Instance spawned successfully.
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.555 185654 DEBUG nova.virt.libvirt.driver [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.578 185654 DEBUG nova.virt.libvirt.driver [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.579 185654 DEBUG nova.virt.libvirt.driver [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.579 185654 DEBUG nova.virt.libvirt.driver [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.580 185654 DEBUG nova.virt.libvirt.driver [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.580 185654 DEBUG nova.virt.libvirt.driver [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.580 185654 DEBUG nova.virt.libvirt.driver [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.620 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.624 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.638 185654 DEBUG nova.compute.manager [req-c1af78db-eab6-4410-97ce-90d361eb2b37 req-d999313b-5fe7-4be6-ba0e-6bf930bf5bb2 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Received event network-vif-plugged-09ecb7c4-8334-4e9d-8fbc-d238d1a73476 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.638 185654 DEBUG oslo_concurrency.lockutils [req-c1af78db-eab6-4410-97ce-90d361eb2b37 req-d999313b-5fe7-4be6-ba0e-6bf930bf5bb2 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "a5213d25-e31d-4018-991a-ffcc9a3cf495-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.639 185654 DEBUG oslo_concurrency.lockutils [req-c1af78db-eab6-4410-97ce-90d361eb2b37 req-d999313b-5fe7-4be6-ba0e-6bf930bf5bb2 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "a5213d25-e31d-4018-991a-ffcc9a3cf495-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.639 185654 DEBUG oslo_concurrency.lockutils [req-c1af78db-eab6-4410-97ce-90d361eb2b37 req-d999313b-5fe7-4be6-ba0e-6bf930bf5bb2 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "a5213d25-e31d-4018-991a-ffcc9a3cf495-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.639 185654 DEBUG nova.compute.manager [req-c1af78db-eab6-4410-97ce-90d361eb2b37 req-d999313b-5fe7-4be6-ba0e-6bf930bf5bb2 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Processing event network-vif-plugged-09ecb7c4-8334-4e9d-8fbc-d238d1a73476 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.639 185654 DEBUG nova.compute.manager [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.647 185654 DEBUG nova.virt.libvirt.driver [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.649 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.650 185654 DEBUG nova.virt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Emitting event <LifecycleEvent: 1769555158.6436417, a5213d25-e31d-4018-991a-ffcc9a3cf495 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.650 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] VM Resumed (Lifecycle Event)
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.661 185654 INFO nova.virt.libvirt.driver [-] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Instance spawned successfully.
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.661 185654 DEBUG nova.virt.libvirt.driver [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.665 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.671 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.681 185654 DEBUG nova.virt.libvirt.driver [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.682 185654 DEBUG nova.virt.libvirt.driver [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.682 185654 DEBUG nova.virt.libvirt.driver [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.683 185654 DEBUG nova.virt.libvirt.driver [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.683 185654 DEBUG nova.virt.libvirt.driver [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.684 185654 DEBUG nova.virt.libvirt.driver [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.690 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.748 185654 INFO nova.compute.manager [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Took 15.21 seconds to spawn the instance on the hypervisor.
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.748 185654 DEBUG nova.compute.manager [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.783 185654 INFO nova.compute.manager [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Took 14.61 seconds to spawn the instance on the hypervisor.
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.784 185654 DEBUG nova.compute.manager [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.903 185654 INFO nova.compute.manager [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Took 16.19 seconds to build instance.
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.923 185654 INFO nova.compute.manager [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Took 15.31 seconds to build instance.
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.961 185654 DEBUG oslo_concurrency.lockutils [None req-6dbb0d07-93ca-46f7-84b5-774b87fd203f 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Lock "a5213d25-e31d-4018-991a-ffcc9a3cf495" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.453s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:58 compute-0 nova_compute[185650]: 2026-01-27 23:05:58.962 185654 DEBUG oslo_concurrency.lockutils [None req-3cf73b79-1d82-418e-a645-77f87a383fbf b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Lock "92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.337s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:05:59 compute-0 nova_compute[185650]: 2026-01-27 23:05:59.308 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:59 compute-0 NetworkManager[56600]: <info>  [1769555159.3547] manager: (patch-provnet-d119fa92-bef8-49e6-a71b-dd674f01104f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Jan 27 23:05:59 compute-0 NetworkManager[56600]: <info>  [1769555159.3554] manager: (patch-br-int-to-provnet-d119fa92-bef8-49e6-a71b-dd674f01104f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Jan 27 23:05:59 compute-0 nova_compute[185650]: 2026-01-27 23:05:59.358 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:59 compute-0 nova_compute[185650]: 2026-01-27 23:05:59.435 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:59 compute-0 ovn_controller[98048]: 2026-01-27T23:05:59Z|00088|binding|INFO|Releasing lport 05067d9f-6d8c-4f3c-a42f-40ac1462630e from this chassis (sb_readonly=0)
Jan 27 23:05:59 compute-0 ovn_controller[98048]: 2026-01-27T23:05:59Z|00089|binding|INFO|Releasing lport babee362-409a-4d1f-bc47-c6a6dce734ff from this chassis (sb_readonly=0)
Jan 27 23:05:59 compute-0 ovn_controller[98048]: 2026-01-27T23:05:59Z|00090|binding|INFO|Releasing lport 921266f6-a26d-4b9c-9f31-d1b9bea161ea from this chassis (sb_readonly=0)
Jan 27 23:05:59 compute-0 ovn_controller[98048]: 2026-01-27T23:05:59Z|00091|binding|INFO|Releasing lport 41776a65-3925-474f-a135-3e28059d7e34 from this chassis (sb_readonly=0)
Jan 27 23:05:59 compute-0 nova_compute[185650]: 2026-01-27 23:05:59.453 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:05:59 compute-0 podman[201529]: time="2026-01-27T23:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 23:05:59 compute-0 podman[201529]: @ - - [27/Jan/2026:23:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32206 "" "Go-http-client/1.1"
Jan 27 23:05:59 compute-0 podman[201529]: @ - - [27/Jan/2026:23:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5762 "" "Go-http-client/1.1"
Jan 27 23:06:00 compute-0 nova_compute[185650]: 2026-01-27 23:06:00.579 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:00 compute-0 nova_compute[185650]: 2026-01-27 23:06:00.904 185654 DEBUG nova.compute.manager [req-dd9e7f88-dab2-413e-8877-b0396c2a02c1 req-61d02df7-a200-43d6-b693-7f466240db87 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Received event network-vif-plugged-063f8734-c708-4ac4-90bf-5a2100f150c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 23:06:00 compute-0 nova_compute[185650]: 2026-01-27 23:06:00.904 185654 DEBUG oslo_concurrency.lockutils [req-dd9e7f88-dab2-413e-8877-b0396c2a02c1 req-61d02df7-a200-43d6-b693-7f466240db87 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:06:00 compute-0 nova_compute[185650]: 2026-01-27 23:06:00.904 185654 DEBUG oslo_concurrency.lockutils [req-dd9e7f88-dab2-413e-8877-b0396c2a02c1 req-61d02df7-a200-43d6-b693-7f466240db87 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:06:00 compute-0 nova_compute[185650]: 2026-01-27 23:06:00.905 185654 DEBUG oslo_concurrency.lockutils [req-dd9e7f88-dab2-413e-8877-b0396c2a02c1 req-61d02df7-a200-43d6-b693-7f466240db87 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:06:00 compute-0 nova_compute[185650]: 2026-01-27 23:06:00.905 185654 DEBUG nova.compute.manager [req-dd9e7f88-dab2-413e-8877-b0396c2a02c1 req-61d02df7-a200-43d6-b693-7f466240db87 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] No waiting events found dispatching network-vif-plugged-063f8734-c708-4ac4-90bf-5a2100f150c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 23:06:00 compute-0 nova_compute[185650]: 2026-01-27 23:06:00.905 185654 WARNING nova.compute.manager [req-dd9e7f88-dab2-413e-8877-b0396c2a02c1 req-61d02df7-a200-43d6-b693-7f466240db87 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Received unexpected event network-vif-plugged-063f8734-c708-4ac4-90bf-5a2100f150c8 for instance with vm_state active and task_state None.
Jan 27 23:06:00 compute-0 ovn_controller[98048]: 2026-01-27T23:06:00Z|00092|binding|INFO|Releasing lport 05067d9f-6d8c-4f3c-a42f-40ac1462630e from this chassis (sb_readonly=0)
Jan 27 23:06:00 compute-0 ovn_controller[98048]: 2026-01-27T23:06:00Z|00093|binding|INFO|Releasing lport babee362-409a-4d1f-bc47-c6a6dce734ff from this chassis (sb_readonly=0)
Jan 27 23:06:00 compute-0 ovn_controller[98048]: 2026-01-27T23:06:00Z|00094|binding|INFO|Releasing lport 921266f6-a26d-4b9c-9f31-d1b9bea161ea from this chassis (sb_readonly=0)
Jan 27 23:06:00 compute-0 ovn_controller[98048]: 2026-01-27T23:06:00Z|00095|binding|INFO|Releasing lport 41776a65-3925-474f-a135-3e28059d7e34 from this chassis (sb_readonly=0)
Jan 27 23:06:01 compute-0 nova_compute[185650]: 2026-01-27 23:06:01.020 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:01 compute-0 nova_compute[185650]: 2026-01-27 23:06:01.265 185654 DEBUG nova.compute.manager [req-ccc45f70-cff8-40e7-80fd-f3e2c09f7e42 req-d75e0c1b-4c14-4c36-95b3-5d46a8dd7a45 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Received event network-vif-plugged-09ecb7c4-8334-4e9d-8fbc-d238d1a73476 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 23:06:01 compute-0 nova_compute[185650]: 2026-01-27 23:06:01.265 185654 DEBUG oslo_concurrency.lockutils [req-ccc45f70-cff8-40e7-80fd-f3e2c09f7e42 req-d75e0c1b-4c14-4c36-95b3-5d46a8dd7a45 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "a5213d25-e31d-4018-991a-ffcc9a3cf495-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:06:01 compute-0 nova_compute[185650]: 2026-01-27 23:06:01.266 185654 DEBUG oslo_concurrency.lockutils [req-ccc45f70-cff8-40e7-80fd-f3e2c09f7e42 req-d75e0c1b-4c14-4c36-95b3-5d46a8dd7a45 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "a5213d25-e31d-4018-991a-ffcc9a3cf495-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:06:01 compute-0 nova_compute[185650]: 2026-01-27 23:06:01.266 185654 DEBUG oslo_concurrency.lockutils [req-ccc45f70-cff8-40e7-80fd-f3e2c09f7e42 req-d75e0c1b-4c14-4c36-95b3-5d46a8dd7a45 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "a5213d25-e31d-4018-991a-ffcc9a3cf495-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:06:01 compute-0 nova_compute[185650]: 2026-01-27 23:06:01.266 185654 DEBUG nova.compute.manager [req-ccc45f70-cff8-40e7-80fd-f3e2c09f7e42 req-d75e0c1b-4c14-4c36-95b3-5d46a8dd7a45 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] No waiting events found dispatching network-vif-plugged-09ecb7c4-8334-4e9d-8fbc-d238d1a73476 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 23:06:01 compute-0 nova_compute[185650]: 2026-01-27 23:06:01.270 185654 WARNING nova.compute.manager [req-ccc45f70-cff8-40e7-80fd-f3e2c09f7e42 req-d75e0c1b-4c14-4c36-95b3-5d46a8dd7a45 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Received unexpected event network-vif-plugged-09ecb7c4-8334-4e9d-8fbc-d238d1a73476 for instance with vm_state active and task_state None.
Jan 27 23:06:01 compute-0 nova_compute[185650]: 2026-01-27 23:06:01.273 185654 DEBUG nova.compute.manager [req-ccc45f70-cff8-40e7-80fd-f3e2c09f7e42 req-d75e0c1b-4c14-4c36-95b3-5d46a8dd7a45 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Received event network-changed-64b86a6b-6de4-4fee-917e-229794042e8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 23:06:01 compute-0 nova_compute[185650]: 2026-01-27 23:06:01.276 185654 DEBUG nova.compute.manager [req-ccc45f70-cff8-40e7-80fd-f3e2c09f7e42 req-d75e0c1b-4c14-4c36-95b3-5d46a8dd7a45 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Refreshing instance network info cache due to event network-changed-64b86a6b-6de4-4fee-917e-229794042e8e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 23:06:01 compute-0 nova_compute[185650]: 2026-01-27 23:06:01.277 185654 DEBUG oslo_concurrency.lockutils [req-ccc45f70-cff8-40e7-80fd-f3e2c09f7e42 req-d75e0c1b-4c14-4c36-95b3-5d46a8dd7a45 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "refresh_cache-66eb7f87-9511-4da7-8733-ef0673cfab67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 23:06:01 compute-0 nova_compute[185650]: 2026-01-27 23:06:01.278 185654 DEBUG oslo_concurrency.lockutils [req-ccc45f70-cff8-40e7-80fd-f3e2c09f7e42 req-d75e0c1b-4c14-4c36-95b3-5d46a8dd7a45 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquired lock "refresh_cache-66eb7f87-9511-4da7-8733-ef0673cfab67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 23:06:01 compute-0 nova_compute[185650]: 2026-01-27 23:06:01.279 185654 DEBUG nova.network.neutron [req-ccc45f70-cff8-40e7-80fd-f3e2c09f7e42 req-d75e0c1b-4c14-4c36-95b3-5d46a8dd7a45 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Refreshing network info cache for port 64b86a6b-6de4-4fee-917e-229794042e8e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 23:06:01 compute-0 openstack_network_exporter[204648]: ERROR   23:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 23:06:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:06:01 compute-0 openstack_network_exporter[204648]: ERROR   23:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 23:06:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:06:01 compute-0 nova_compute[185650]: 2026-01-27 23:06:01.844 185654 DEBUG oslo_concurrency.lockutils [None req-71f1ac45-fdae-4522-99d7-972c4df75cdc 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Acquiring lock "a5213d25-e31d-4018-991a-ffcc9a3cf495" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:06:01 compute-0 nova_compute[185650]: 2026-01-27 23:06:01.845 185654 DEBUG oslo_concurrency.lockutils [None req-71f1ac45-fdae-4522-99d7-972c4df75cdc 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Lock "a5213d25-e31d-4018-991a-ffcc9a3cf495" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:06:01 compute-0 nova_compute[185650]: 2026-01-27 23:06:01.845 185654 DEBUG oslo_concurrency.lockutils [None req-71f1ac45-fdae-4522-99d7-972c4df75cdc 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Acquiring lock "a5213d25-e31d-4018-991a-ffcc9a3cf495-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:06:01 compute-0 nova_compute[185650]: 2026-01-27 23:06:01.845 185654 DEBUG oslo_concurrency.lockutils [None req-71f1ac45-fdae-4522-99d7-972c4df75cdc 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Lock "a5213d25-e31d-4018-991a-ffcc9a3cf495-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:06:01 compute-0 nova_compute[185650]: 2026-01-27 23:06:01.846 185654 DEBUG oslo_concurrency.lockutils [None req-71f1ac45-fdae-4522-99d7-972c4df75cdc 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Lock "a5213d25-e31d-4018-991a-ffcc9a3cf495-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:06:01 compute-0 nova_compute[185650]: 2026-01-27 23:06:01.847 185654 INFO nova.compute.manager [None req-71f1ac45-fdae-4522-99d7-972c4df75cdc 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Terminating instance
Jan 27 23:06:01 compute-0 nova_compute[185650]: 2026-01-27 23:06:01.848 185654 DEBUG nova.compute.manager [None req-71f1ac45-fdae-4522-99d7-972c4df75cdc 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 23:06:01 compute-0 kernel: tap09ecb7c4-83 (unregistering): left promiscuous mode
Jan 27 23:06:01 compute-0 NetworkManager[56600]: <info>  [1769555161.8918] device (tap09ecb7c4-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 23:06:01 compute-0 ovn_controller[98048]: 2026-01-27T23:06:01Z|00096|binding|INFO|Releasing lport 09ecb7c4-8334-4e9d-8fbc-d238d1a73476 from this chassis (sb_readonly=0)
Jan 27 23:06:01 compute-0 ovn_controller[98048]: 2026-01-27T23:06:01Z|00097|binding|INFO|Setting lport 09ecb7c4-8334-4e9d-8fbc-d238d1a73476 down in Southbound
Jan 27 23:06:01 compute-0 ovn_controller[98048]: 2026-01-27T23:06:01Z|00098|binding|INFO|Removing iface tap09ecb7c4-83 ovn-installed in OVS
Jan 27 23:06:01 compute-0 nova_compute[185650]: 2026-01-27 23:06:01.901 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:01 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:01.926 107302 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:5b:de 10.100.0.10'], port_security=['fa:16:3e:3e:5b:de 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a5213d25-e31d-4018-991a-ffcc9a3cf495', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52166b3a-c6fd-46c4-9a20-10228c0e8119', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1841b657d00c42cba8cf6368908d3e05', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a25c1474-cdc4-45ab-8ec7-32034a13f236', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d06d7c56-fdba-4ff8-88aa-2b240f5b89c7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8d908cb640>], logical_port=09ecb7c4-8334-4e9d-8fbc-d238d1a73476) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f8d908cb640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 23:06:01 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:01.927 107302 INFO neutron.agent.ovn.metadata.agent [-] Port 09ecb7c4-8334-4e9d-8fbc-d238d1a73476 in datapath 52166b3a-c6fd-46c4-9a20-10228c0e8119 unbound from our chassis
Jan 27 23:06:01 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:01.929 107302 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 52166b3a-c6fd-46c4-9a20-10228c0e8119, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 23:06:01 compute-0 nova_compute[185650]: 2026-01-27 23:06:01.931 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:01 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:01.932 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[186d6ede-4894-441b-a58f-b7e2b94346af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:06:01 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:01.933 107302 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-52166b3a-c6fd-46c4-9a20-10228c0e8119 namespace which is not needed anymore
Jan 27 23:06:01 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Jan 27 23:06:01 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 3.808s CPU time.
Jan 27 23:06:01 compute-0 systemd-machined[157036]: Machine qemu-9-instance-00000009 terminated.
Jan 27 23:06:02 compute-0 podman[249069]: 2026-01-27 23:06:02.033421788 +0000 UTC m=+0.126979102 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 23:06:02 compute-0 neutron-haproxy-ovnmeta-52166b3a-c6fd-46c4-9a20-10228c0e8119[249050]: [NOTICE]   (249055) : haproxy version is 2.8.14-c23fe91
Jan 27 23:06:02 compute-0 neutron-haproxy-ovnmeta-52166b3a-c6fd-46c4-9a20-10228c0e8119[249050]: [NOTICE]   (249055) : path to executable is /usr/sbin/haproxy
Jan 27 23:06:02 compute-0 neutron-haproxy-ovnmeta-52166b3a-c6fd-46c4-9a20-10228c0e8119[249050]: [WARNING]  (249055) : Exiting Master process...
Jan 27 23:06:02 compute-0 neutron-haproxy-ovnmeta-52166b3a-c6fd-46c4-9a20-10228c0e8119[249050]: [WARNING]  (249055) : Exiting Master process...
Jan 27 23:06:02 compute-0 neutron-haproxy-ovnmeta-52166b3a-c6fd-46c4-9a20-10228c0e8119[249050]: [ALERT]    (249055) : Current worker (249057) exited with code 143 (Terminated)
Jan 27 23:06:02 compute-0 neutron-haproxy-ovnmeta-52166b3a-c6fd-46c4-9a20-10228c0e8119[249050]: [WARNING]  (249055) : All workers exited. Exiting... (0)
Jan 27 23:06:02 compute-0 nova_compute[185650]: 2026-01-27 23:06:02.151 185654 INFO nova.virt.libvirt.driver [-] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Instance destroyed successfully.
Jan 27 23:06:02 compute-0 nova_compute[185650]: 2026-01-27 23:06:02.152 185654 DEBUG nova.objects.instance [None req-71f1ac45-fdae-4522-99d7-972c4df75cdc 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Lazy-loading 'resources' on Instance uuid a5213d25-e31d-4018-991a-ffcc9a3cf495 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 23:06:02 compute-0 systemd[1]: libpod-58a5437dc90aa9a1e487b5bcf055824bd5acc72e83167f66ffeadbb1165fe26a.scope: Deactivated successfully.
Jan 27 23:06:02 compute-0 podman[249115]: 2026-01-27 23:06:02.157629076 +0000 UTC m=+0.098796882 container died 58a5437dc90aa9a1e487b5bcf055824bd5acc72e83167f66ffeadbb1165fe26a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-52166b3a-c6fd-46c4-9a20-10228c0e8119, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 23:06:02 compute-0 nova_compute[185650]: 2026-01-27 23:06:02.168 185654 DEBUG nova.virt.libvirt.vif [None req-71f1ac45-fdae-4522-99d7-972c4df75cdc 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T23:05:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1101404520',display_name='tempest-ServerAddressesTestJSON-server-1101404520',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1101404520',id=9,image_ref='319632d9-1bdd-4de0-b1d2-0507a3e91b6b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-27T23:05:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1841b657d00c42cba8cf6368908d3e05',ramdisk_id='',reservation_id='r-zhd9h390',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='319632d9-1bdd-4de0-b1d2-0507a3e91b6b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-1567840329',owner_user_name='tempest-ServerAddressesTestJSON-1567840329-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T23:05:58Z,user_data=None,user_id='97de12b7dcf64c95a6ef85a1de71a992',uuid=a5213d25-e31d-4018-991a-ffcc9a3cf495,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "09ecb7c4-8334-4e9d-8fbc-d238d1a73476", "address": "fa:16:3e:3e:5b:de", "network": {"id": "52166b3a-c6fd-46c4-9a20-10228c0e8119", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-393615870-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1841b657d00c42cba8cf6368908d3e05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ecb7c4-83", "ovs_interfaceid": "09ecb7c4-8334-4e9d-8fbc-d238d1a73476", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 23:06:02 compute-0 nova_compute[185650]: 2026-01-27 23:06:02.169 185654 DEBUG nova.network.os_vif_util [None req-71f1ac45-fdae-4522-99d7-972c4df75cdc 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Converting VIF {"id": "09ecb7c4-8334-4e9d-8fbc-d238d1a73476", "address": "fa:16:3e:3e:5b:de", "network": {"id": "52166b3a-c6fd-46c4-9a20-10228c0e8119", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-393615870-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1841b657d00c42cba8cf6368908d3e05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ecb7c4-83", "ovs_interfaceid": "09ecb7c4-8334-4e9d-8fbc-d238d1a73476", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 23:06:02 compute-0 nova_compute[185650]: 2026-01-27 23:06:02.170 185654 DEBUG nova.network.os_vif_util [None req-71f1ac45-fdae-4522-99d7-972c4df75cdc 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:5b:de,bridge_name='br-int',has_traffic_filtering=True,id=09ecb7c4-8334-4e9d-8fbc-d238d1a73476,network=Network(52166b3a-c6fd-46c4-9a20-10228c0e8119),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09ecb7c4-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 23:06:02 compute-0 nova_compute[185650]: 2026-01-27 23:06:02.170 185654 DEBUG os_vif [None req-71f1ac45-fdae-4522-99d7-972c4df75cdc 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:5b:de,bridge_name='br-int',has_traffic_filtering=True,id=09ecb7c4-8334-4e9d-8fbc-d238d1a73476,network=Network(52166b3a-c6fd-46c4-9a20-10228c0e8119),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09ecb7c4-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 23:06:02 compute-0 nova_compute[185650]: 2026-01-27 23:06:02.171 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:02 compute-0 nova_compute[185650]: 2026-01-27 23:06:02.172 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09ecb7c4-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:06:02 compute-0 nova_compute[185650]: 2026-01-27 23:06:02.174 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:02 compute-0 nova_compute[185650]: 2026-01-27 23:06:02.176 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 23:06:02 compute-0 nova_compute[185650]: 2026-01-27 23:06:02.179 185654 INFO os_vif [None req-71f1ac45-fdae-4522-99d7-972c4df75cdc 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:5b:de,bridge_name='br-int',has_traffic_filtering=True,id=09ecb7c4-8334-4e9d-8fbc-d238d1a73476,network=Network(52166b3a-c6fd-46c4-9a20-10228c0e8119),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09ecb7c4-83')
Jan 27 23:06:02 compute-0 nova_compute[185650]: 2026-01-27 23:06:02.180 185654 INFO nova.virt.libvirt.driver [None req-71f1ac45-fdae-4522-99d7-972c4df75cdc 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Deleting instance files /var/lib/nova/instances/a5213d25-e31d-4018-991a-ffcc9a3cf495_del
Jan 27 23:06:02 compute-0 nova_compute[185650]: 2026-01-27 23:06:02.180 185654 INFO nova.virt.libvirt.driver [None req-71f1ac45-fdae-4522-99d7-972c4df75cdc 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Deletion of /var/lib/nova/instances/a5213d25-e31d-4018-991a-ffcc9a3cf495_del complete
Jan 27 23:06:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-58a5437dc90aa9a1e487b5bcf055824bd5acc72e83167f66ffeadbb1165fe26a-userdata-shm.mount: Deactivated successfully.
Jan 27 23:06:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a73079b7a18b1b5718fbbf5164dd2db91daea6a2fffe5f5bcf820f2b84766a6-merged.mount: Deactivated successfully.
Jan 27 23:06:02 compute-0 podman[249115]: 2026-01-27 23:06:02.263792924 +0000 UTC m=+0.204960730 container cleanup 58a5437dc90aa9a1e487b5bcf055824bd5acc72e83167f66ffeadbb1165fe26a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-52166b3a-c6fd-46c4-9a20-10228c0e8119, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 27 23:06:02 compute-0 systemd[1]: libpod-conmon-58a5437dc90aa9a1e487b5bcf055824bd5acc72e83167f66ffeadbb1165fe26a.scope: Deactivated successfully.
Jan 27 23:06:02 compute-0 podman[249161]: 2026-01-27 23:06:02.361812814 +0000 UTC m=+0.072422170 container remove 58a5437dc90aa9a1e487b5bcf055824bd5acc72e83167f66ffeadbb1165fe26a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-52166b3a-c6fd-46c4-9a20-10228c0e8119, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 27 23:06:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:02.370 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[8ada8722-decc-45cc-8317-d240251b9a6d]: (4, ('Tue Jan 27 11:06:02 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-52166b3a-c6fd-46c4-9a20-10228c0e8119 (58a5437dc90aa9a1e487b5bcf055824bd5acc72e83167f66ffeadbb1165fe26a)\n58a5437dc90aa9a1e487b5bcf055824bd5acc72e83167f66ffeadbb1165fe26a\nTue Jan 27 11:06:02 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-52166b3a-c6fd-46c4-9a20-10228c0e8119 (58a5437dc90aa9a1e487b5bcf055824bd5acc72e83167f66ffeadbb1165fe26a)\n58a5437dc90aa9a1e487b5bcf055824bd5acc72e83167f66ffeadbb1165fe26a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:06:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:02.373 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[75fbdd4a-953a-4cad-8a5a-b9a8cfb3c3b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:06:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:02.375 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52166b3a-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:06:02 compute-0 kernel: tap52166b3a-c0: left promiscuous mode
Jan 27 23:06:02 compute-0 nova_compute[185650]: 2026-01-27 23:06:02.395 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:02.398 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[3a0212d3-545b-4f21-ad2b-7025a541b676]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:06:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:02.421 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[3f6f7ad9-171f-4847-a8ca-43cf22f6355b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:06:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:02.425 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[4ee21a96-9a2b-4605-b60a-a710e0fecbc2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:06:02 compute-0 nova_compute[185650]: 2026-01-27 23:06:02.430 185654 INFO nova.compute.manager [None req-71f1ac45-fdae-4522-99d7-972c4df75cdc 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Took 0.58 seconds to destroy the instance on the hypervisor.
Jan 27 23:06:02 compute-0 nova_compute[185650]: 2026-01-27 23:06:02.431 185654 DEBUG oslo.service.loopingcall [None req-71f1ac45-fdae-4522-99d7-972c4df75cdc 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 23:06:02 compute-0 nova_compute[185650]: 2026-01-27 23:06:02.431 185654 DEBUG nova.compute.manager [-] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 23:06:02 compute-0 nova_compute[185650]: 2026-01-27 23:06:02.431 185654 DEBUG nova.network.neutron [-] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 23:06:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:02.447 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[6a2a0b44-98dd-43d2-88ff-7268382cd74c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498784, 'reachable_time': 26566, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249176, 'error': None, 'target': 'ovnmeta-52166b3a-c6fd-46c4-9a20-10228c0e8119', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:06:02 compute-0 systemd[1]: run-netns-ovnmeta\x2d52166b3a\x2dc6fd\x2d46c4\x2d9a20\x2d10228c0e8119.mount: Deactivated successfully.
Jan 27 23:06:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:02.454 107797 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-52166b3a-c6fd-46c4-9a20-10228c0e8119 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 23:06:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:02.454 107797 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c033b4-86f5-4ab4-ad6a-114bef48869a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.016 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.057 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.058 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.058 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.058 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.148 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66eb7f87-9511-4da7-8733-ef0673cfab67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.226 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66eb7f87-9511-4da7-8733-ef0673cfab67/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.227 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66eb7f87-9511-4da7-8733-ef0673cfab67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.296 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66eb7f87-9511-4da7-8733-ef0673cfab67/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.305 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.378 185654 DEBUG nova.compute.manager [req-fedea769-3eed-4d16-acd0-7f5de0d91107 req-e3299735-18a5-434d-b6ff-ef2ddbf2b415 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Received event network-changed-5c31fe8e-f952-4e71-b32a-ec4759a7fc07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.379 185654 DEBUG nova.compute.manager [req-fedea769-3eed-4d16-acd0-7f5de0d91107 req-e3299735-18a5-434d-b6ff-ef2ddbf2b415 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Refreshing instance network info cache due to event network-changed-5c31fe8e-f952-4e71-b32a-ec4759a7fc07. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.380 185654 DEBUG oslo_concurrency.lockutils [req-fedea769-3eed-4d16-acd0-7f5de0d91107 req-e3299735-18a5-434d-b6ff-ef2ddbf2b415 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "refresh_cache-9033d5a6-ab60-43e3-bbcb-3a8b83161c58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.380 185654 DEBUG oslo_concurrency.lockutils [req-fedea769-3eed-4d16-acd0-7f5de0d91107 req-e3299735-18a5-434d-b6ff-ef2ddbf2b415 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquired lock "refresh_cache-9033d5a6-ab60-43e3-bbcb-3a8b83161c58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.380 185654 DEBUG nova.network.neutron [req-fedea769-3eed-4d16-acd0-7f5de0d91107 req-e3299735-18a5-434d-b6ff-ef2ddbf2b415 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Refreshing network info cache for port 5c31fe8e-f952-4e71-b32a-ec4759a7fc07 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.387 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.388 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.457 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.465 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.489 185654 DEBUG nova.compute.manager [req-7f20fce8-f48b-4d7d-9c67-34c029eec03e req-563775a5-be62-474e-a289-89973c609a0b b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Received event network-vif-unplugged-09ecb7c4-8334-4e9d-8fbc-d238d1a73476 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.492 185654 DEBUG oslo_concurrency.lockutils [req-7f20fce8-f48b-4d7d-9c67-34c029eec03e req-563775a5-be62-474e-a289-89973c609a0b b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "a5213d25-e31d-4018-991a-ffcc9a3cf495-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.492 185654 DEBUG oslo_concurrency.lockutils [req-7f20fce8-f48b-4d7d-9c67-34c029eec03e req-563775a5-be62-474e-a289-89973c609a0b b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "a5213d25-e31d-4018-991a-ffcc9a3cf495-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.493 185654 DEBUG oslo_concurrency.lockutils [req-7f20fce8-f48b-4d7d-9c67-34c029eec03e req-563775a5-be62-474e-a289-89973c609a0b b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "a5213d25-e31d-4018-991a-ffcc9a3cf495-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.493 185654 DEBUG nova.compute.manager [req-7f20fce8-f48b-4d7d-9c67-34c029eec03e req-563775a5-be62-474e-a289-89973c609a0b b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] No waiting events found dispatching network-vif-unplugged-09ecb7c4-8334-4e9d-8fbc-d238d1a73476 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.493 185654 DEBUG nova.compute.manager [req-7f20fce8-f48b-4d7d-9c67-34c029eec03e req-563775a5-be62-474e-a289-89973c609a0b b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Received event network-vif-unplugged-09ecb7c4-8334-4e9d-8fbc-d238d1a73476 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.494 185654 DEBUG nova.compute.manager [req-7f20fce8-f48b-4d7d-9c67-34c029eec03e req-563775a5-be62-474e-a289-89973c609a0b b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Received event network-vif-plugged-09ecb7c4-8334-4e9d-8fbc-d238d1a73476 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.494 185654 DEBUG oslo_concurrency.lockutils [req-7f20fce8-f48b-4d7d-9c67-34c029eec03e req-563775a5-be62-474e-a289-89973c609a0b b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "a5213d25-e31d-4018-991a-ffcc9a3cf495-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.494 185654 DEBUG oslo_concurrency.lockutils [req-7f20fce8-f48b-4d7d-9c67-34c029eec03e req-563775a5-be62-474e-a289-89973c609a0b b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "a5213d25-e31d-4018-991a-ffcc9a3cf495-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.495 185654 DEBUG oslo_concurrency.lockutils [req-7f20fce8-f48b-4d7d-9c67-34c029eec03e req-563775a5-be62-474e-a289-89973c609a0b b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "a5213d25-e31d-4018-991a-ffcc9a3cf495-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.495 185654 DEBUG nova.compute.manager [req-7f20fce8-f48b-4d7d-9c67-34c029eec03e req-563775a5-be62-474e-a289-89973c609a0b b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] No waiting events found dispatching network-vif-plugged-09ecb7c4-8334-4e9d-8fbc-d238d1a73476 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.495 185654 WARNING nova.compute.manager [req-7f20fce8-f48b-4d7d-9c67-34c029eec03e req-563775a5-be62-474e-a289-89973c609a0b b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Received unexpected event network-vif-plugged-09ecb7c4-8334-4e9d-8fbc-d238d1a73476 for instance with vm_state active and task_state deleting.
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.529 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.531 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.608 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.708 185654 DEBUG nova.network.neutron [-] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.748 185654 INFO nova.compute.manager [-] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Took 1.32 seconds to deallocate network for instance.
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.839 185654 DEBUG oslo_concurrency.lockutils [None req-71f1ac45-fdae-4522-99d7-972c4df75cdc 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:06:03 compute-0 nova_compute[185650]: 2026-01-27 23:06:03.840 185654 DEBUG oslo_concurrency.lockutils [None req-71f1ac45-fdae-4522-99d7-972c4df75cdc 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:06:04 compute-0 nova_compute[185650]: 2026-01-27 23:06:04.127 185654 WARNING nova.virt.libvirt.driver [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 23:06:04 compute-0 nova_compute[185650]: 2026-01-27 23:06:04.129 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4900MB free_disk=72.37685012817383GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 23:06:04 compute-0 nova_compute[185650]: 2026-01-27 23:06:04.129 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:06:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:04.161 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:06:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:04.161 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:06:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:04.162 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:06:04 compute-0 nova_compute[185650]: 2026-01-27 23:06:04.233 185654 DEBUG nova.compute.provider_tree [None req-71f1ac45-fdae-4522-99d7-972c4df75cdc 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 23:06:04 compute-0 nova_compute[185650]: 2026-01-27 23:06:04.253 185654 DEBUG nova.scheduler.client.report [None req-71f1ac45-fdae-4522-99d7-972c4df75cdc 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 23:06:04 compute-0 nova_compute[185650]: 2026-01-27 23:06:04.286 185654 DEBUG oslo_concurrency.lockutils [None req-71f1ac45-fdae-4522-99d7-972c4df75cdc 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.446s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:06:04 compute-0 nova_compute[185650]: 2026-01-27 23:06:04.289 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:06:04 compute-0 nova_compute[185650]: 2026-01-27 23:06:04.473 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 66eb7f87-9511-4da7-8733-ef0673cfab67 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 23:06:04 compute-0 nova_compute[185650]: 2026-01-27 23:06:04.475 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 9033d5a6-ab60-43e3-bbcb-3a8b83161c58 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 23:06:04 compute-0 nova_compute[185650]: 2026-01-27 23:06:04.476 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 23:06:04 compute-0 nova_compute[185650]: 2026-01-27 23:06:04.476 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 23:06:04 compute-0 nova_compute[185650]: 2026-01-27 23:06:04.476 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 23:06:04 compute-0 nova_compute[185650]: 2026-01-27 23:06:04.481 185654 INFO nova.scheduler.client.report [None req-71f1ac45-fdae-4522-99d7-972c4df75cdc 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Deleted allocations for instance a5213d25-e31d-4018-991a-ffcc9a3cf495
Jan 27 23:06:04 compute-0 nova_compute[185650]: 2026-01-27 23:06:04.581 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 23:06:04 compute-0 nova_compute[185650]: 2026-01-27 23:06:04.588 185654 DEBUG oslo_concurrency.lockutils [None req-71f1ac45-fdae-4522-99d7-972c4df75cdc 97de12b7dcf64c95a6ef85a1de71a992 1841b657d00c42cba8cf6368908d3e05 - - default default] Lock "a5213d25-e31d-4018-991a-ffcc9a3cf495" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:06:04 compute-0 nova_compute[185650]: 2026-01-27 23:06:04.590 185654 DEBUG nova.network.neutron [req-ccc45f70-cff8-40e7-80fd-f3e2c09f7e42 req-d75e0c1b-4c14-4c36-95b3-5d46a8dd7a45 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Updated VIF entry in instance network info cache for port 64b86a6b-6de4-4fee-917e-229794042e8e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 23:06:04 compute-0 nova_compute[185650]: 2026-01-27 23:06:04.591 185654 DEBUG nova.network.neutron [req-ccc45f70-cff8-40e7-80fd-f3e2c09f7e42 req-d75e0c1b-4c14-4c36-95b3-5d46a8dd7a45 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Updating instance_info_cache with network_info: [{"id": "64b86a6b-6de4-4fee-917e-229794042e8e", "address": "fa:16:3e:23:60:c6", "network": {"id": "6d0f9d9e-8cd6-4a68-8926-de88e69f60d4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1504245290-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "270690dca2514a49843b866111c87d39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64b86a6b-6d", "ovs_interfaceid": "64b86a6b-6de4-4fee-917e-229794042e8e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 23:06:04 compute-0 nova_compute[185650]: 2026-01-27 23:06:04.618 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 23:06:04 compute-0 nova_compute[185650]: 2026-01-27 23:06:04.647 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 23:06:04 compute-0 nova_compute[185650]: 2026-01-27 23:06:04.648 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.360s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:06:04 compute-0 nova_compute[185650]: 2026-01-27 23:06:04.674 185654 DEBUG oslo_concurrency.lockutils [req-ccc45f70-cff8-40e7-80fd-f3e2c09f7e42 req-d75e0c1b-4c14-4c36-95b3-5d46a8dd7a45 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Releasing lock "refresh_cache-66eb7f87-9511-4da7-8733-ef0673cfab67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 23:06:05 compute-0 podman[249196]: 2026-01-27 23:06:05.381563681 +0000 UTC m=+0.084505162 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 27 23:06:05 compute-0 nova_compute[185650]: 2026-01-27 23:06:05.580 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:05 compute-0 nova_compute[185650]: 2026-01-27 23:06:05.763 185654 DEBUG nova.compute.manager [req-bbcfa4a2-4c65-4414-899c-53c1c5c5c6fd req-cf69649a-6380-4929-92f4-b77898717945 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Received event network-vif-deleted-09ecb7c4-8334-4e9d-8fbc-d238d1a73476 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 23:06:05 compute-0 nova_compute[185650]: 2026-01-27 23:06:05.764 185654 DEBUG nova.compute.manager [req-bbcfa4a2-4c65-4414-899c-53c1c5c5c6fd req-cf69649a-6380-4929-92f4-b77898717945 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Received event network-changed-063f8734-c708-4ac4-90bf-5a2100f150c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 23:06:05 compute-0 nova_compute[185650]: 2026-01-27 23:06:05.764 185654 DEBUG nova.compute.manager [req-bbcfa4a2-4c65-4414-899c-53c1c5c5c6fd req-cf69649a-6380-4929-92f4-b77898717945 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Refreshing instance network info cache due to event network-changed-063f8734-c708-4ac4-90bf-5a2100f150c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 23:06:05 compute-0 nova_compute[185650]: 2026-01-27 23:06:05.765 185654 DEBUG oslo_concurrency.lockutils [req-bbcfa4a2-4c65-4414-899c-53c1c5c5c6fd req-cf69649a-6380-4929-92f4-b77898717945 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "refresh_cache-92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 23:06:05 compute-0 nova_compute[185650]: 2026-01-27 23:06:05.765 185654 DEBUG oslo_concurrency.lockutils [req-bbcfa4a2-4c65-4414-899c-53c1c5c5c6fd req-cf69649a-6380-4929-92f4-b77898717945 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquired lock "refresh_cache-92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 23:06:05 compute-0 nova_compute[185650]: 2026-01-27 23:06:05.765 185654 DEBUG nova.network.neutron [req-bbcfa4a2-4c65-4414-899c-53c1c5c5c6fd req-cf69649a-6380-4929-92f4-b77898717945 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Refreshing network info cache for port 063f8734-c708-4ac4-90bf-5a2100f150c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 23:06:05 compute-0 nova_compute[185650]: 2026-01-27 23:06:05.886 185654 DEBUG nova.network.neutron [req-fedea769-3eed-4d16-acd0-7f5de0d91107 req-e3299735-18a5-434d-b6ff-ef2ddbf2b415 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Updated VIF entry in instance network info cache for port 5c31fe8e-f952-4e71-b32a-ec4759a7fc07. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 23:06:05 compute-0 nova_compute[185650]: 2026-01-27 23:06:05.887 185654 DEBUG nova.network.neutron [req-fedea769-3eed-4d16-acd0-7f5de0d91107 req-e3299735-18a5-434d-b6ff-ef2ddbf2b415 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Updating instance_info_cache with network_info: [{"id": "5c31fe8e-f952-4e71-b32a-ec4759a7fc07", "address": "fa:16:3e:81:28:a4", "network": {"id": "b56ee5fa-e690-4d9b-a6e1-7815589f421e", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-161936656-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74f54dfa359341ba8894a95865378d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c31fe8e-f9", "ovs_interfaceid": "5c31fe8e-f952-4e71-b32a-ec4759a7fc07", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 23:06:05 compute-0 nova_compute[185650]: 2026-01-27 23:06:05.942 185654 DEBUG oslo_concurrency.lockutils [req-fedea769-3eed-4d16-acd0-7f5de0d91107 req-e3299735-18a5-434d-b6ff-ef2ddbf2b415 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Releasing lock "refresh_cache-9033d5a6-ab60-43e3-bbcb-3a8b83161c58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.382 185654 DEBUG oslo_concurrency.lockutils [None req-bb06a9c5-f772-45b4-9996-fa3ddbddf96a b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Acquiring lock "92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.383 185654 DEBUG oslo_concurrency.lockutils [None req-bb06a9c5-f772-45b4-9996-fa3ddbddf96a b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Lock "92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.383 185654 DEBUG oslo_concurrency.lockutils [None req-bb06a9c5-f772-45b4-9996-fa3ddbddf96a b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Acquiring lock "92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.383 185654 DEBUG oslo_concurrency.lockutils [None req-bb06a9c5-f772-45b4-9996-fa3ddbddf96a b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Lock "92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.383 185654 DEBUG oslo_concurrency.lockutils [None req-bb06a9c5-f772-45b4-9996-fa3ddbddf96a b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Lock "92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.384 185654 INFO nova.compute.manager [None req-bb06a9c5-f772-45b4-9996-fa3ddbddf96a b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Terminating instance
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.385 185654 DEBUG nova.compute.manager [None req-bb06a9c5-f772-45b4-9996-fa3ddbddf96a b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 27 23:06:06 compute-0 kernel: tap063f8734-c7 (unregistering): left promiscuous mode
Jan 27 23:06:06 compute-0 NetworkManager[56600]: <info>  [1769555166.4143] device (tap063f8734-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 27 23:06:06 compute-0 ovn_controller[98048]: 2026-01-27T23:06:06Z|00099|binding|INFO|Releasing lport 063f8734-c708-4ac4-90bf-5a2100f150c8 from this chassis (sb_readonly=0)
Jan 27 23:06:06 compute-0 ovn_controller[98048]: 2026-01-27T23:06:06Z|00100|binding|INFO|Setting lport 063f8734-c708-4ac4-90bf-5a2100f150c8 down in Southbound
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.427 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:06 compute-0 ovn_controller[98048]: 2026-01-27T23:06:06Z|00101|binding|INFO|Removing iface tap063f8734-c7 ovn-installed in OVS
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.433 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:06 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:06.443 107302 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:3b:73 10.100.0.9'], port_security=['fa:16:3e:c8:3b:73 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce133cd3-da57-40b5-95f2-7f015476df55', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '99d030bedd674ca8aef409ccc5f31fd2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f02aeabb-306c-4ede-94b2-5a8ec614cb76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.207'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=692d1d1d-750e-40d5-9626-e11cb5b102db, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8d908cb640>], logical_port=063f8734-c708-4ac4-90bf-5a2100f150c8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f8d908cb640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 23:06:06 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:06.444 107302 INFO neutron.agent.ovn.metadata.agent [-] Port 063f8734-c708-4ac4-90bf-5a2100f150c8 in datapath ce133cd3-da57-40b5-95f2-7f015476df55 unbound from our chassis
Jan 27 23:06:06 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:06.447 107302 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce133cd3-da57-40b5-95f2-7f015476df55, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.447 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:06 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:06.448 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[0d3677a8-05d4-4764-8dbd-3f0819d722da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:06:06 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:06.458 107302 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ce133cd3-da57-40b5-95f2-7f015476df55 namespace which is not needed anymore
Jan 27 23:06:06 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Jan 27 23:06:06 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 8.476s CPU time.
Jan 27 23:06:06 compute-0 systemd-machined[157036]: Machine qemu-8-instance-00000008 terminated.
Jan 27 23:06:06 compute-0 podman[249218]: 2026-01-27 23:06:06.55236169 +0000 UTC m=+0.084506343 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., config_id=kepler, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, version=9.4, vendor=Red Hat, Inc.)
Jan 27 23:06:06 compute-0 neutron-haproxy-ovnmeta-ce133cd3-da57-40b5-95f2-7f015476df55[248888]: [NOTICE]   (248892) : haproxy version is 2.8.14-c23fe91
Jan 27 23:06:06 compute-0 neutron-haproxy-ovnmeta-ce133cd3-da57-40b5-95f2-7f015476df55[248888]: [NOTICE]   (248892) : path to executable is /usr/sbin/haproxy
Jan 27 23:06:06 compute-0 neutron-haproxy-ovnmeta-ce133cd3-da57-40b5-95f2-7f015476df55[248888]: [WARNING]  (248892) : Exiting Master process...
Jan 27 23:06:06 compute-0 neutron-haproxy-ovnmeta-ce133cd3-da57-40b5-95f2-7f015476df55[248888]: [WARNING]  (248892) : Exiting Master process...
Jan 27 23:06:06 compute-0 neutron-haproxy-ovnmeta-ce133cd3-da57-40b5-95f2-7f015476df55[248888]: [ALERT]    (248892) : Current worker (248894) exited with code 143 (Terminated)
Jan 27 23:06:06 compute-0 neutron-haproxy-ovnmeta-ce133cd3-da57-40b5-95f2-7f015476df55[248888]: [WARNING]  (248892) : All workers exited. Exiting... (0)
Jan 27 23:06:06 compute-0 systemd[1]: libpod-0b25ec9ee6b6ff9d294bde656ec1f8bbf6ea8a46ebb9eba253084695efbe5c79.scope: Deactivated successfully.
Jan 27 23:06:06 compute-0 podman[249255]: 2026-01-27 23:06:06.62448181 +0000 UTC m=+0.061926960 container died 0b25ec9ee6b6ff9d294bde656ec1f8bbf6ea8a46ebb9eba253084695efbe5c79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce133cd3-da57-40b5-95f2-7f015476df55, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 27 23:06:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0b25ec9ee6b6ff9d294bde656ec1f8bbf6ea8a46ebb9eba253084695efbe5c79-userdata-shm.mount: Deactivated successfully.
Jan 27 23:06:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-57a6b5148f31003b152a5646e3df7bff020019a8c9c708f35bdb3dcfdba17e55-merged.mount: Deactivated successfully.
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.699 185654 INFO nova.virt.libvirt.driver [-] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Instance destroyed successfully.
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.701 185654 DEBUG nova.objects.instance [None req-bb06a9c5-f772-45b4-9996-fa3ddbddf96a b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Lazy-loading 'resources' on Instance uuid 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 23:06:06 compute-0 podman[249255]: 2026-01-27 23:06:06.719687875 +0000 UTC m=+0.157133025 container cleanup 0b25ec9ee6b6ff9d294bde656ec1f8bbf6ea8a46ebb9eba253084695efbe5c79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce133cd3-da57-40b5-95f2-7f015476df55, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 23:06:06 compute-0 podman[249257]: 2026-01-27 23:06:06.722071629 +0000 UTC m=+0.139721752 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.722 185654 DEBUG nova.virt.libvirt.vif [None req-bb06a9c5-f772-45b4-9996-fa3ddbddf96a b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-27T23:05:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-295068544',display_name='tempest-ServersTestJSON-server-295068544',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-295068544',id=8,image_ref='319632d9-1bdd-4de0-b1d2-0507a3e91b6b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOduhWHyouHCtRXfH7MrLfcwd0dJphJOUMH0Qoms/901k0RmU1WUrglIpw5S6nBg+kWfRVhjfT3WaO1uhXYyDW7tFhwKehJxN/isuJfe7J5L2LEWwrpRzA11HbJZ3RMe8A==',key_name='tempest-keypair-139774848',keypairs=<?>,launch_index=0,launched_at=2026-01-27T23:05:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='99d030bedd674ca8aef409ccc5f31fd2',ramdisk_id='',reservation_id='r-nysq9n2c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='319632d9-1bdd-4de0-b1d2-0507a3e91b6b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1401357921',owner_user_name='tempest-ServersTestJSON-1401357921-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-27T23:05:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b661812adddc45d4beba73ca32253b11',uuid=92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "063f8734-c708-4ac4-90bf-5a2100f150c8", "address": "fa:16:3e:c8:3b:73", "network": {"id": "ce133cd3-da57-40b5-95f2-7f015476df55", "bridge": "br-int", "label": "tempest-ServersTestJSON-441071384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99d030bedd674ca8aef409ccc5f31fd2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap063f8734-c7", "ovs_interfaceid": "063f8734-c708-4ac4-90bf-5a2100f150c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.723 185654 DEBUG nova.network.os_vif_util [None req-bb06a9c5-f772-45b4-9996-fa3ddbddf96a b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Converting VIF {"id": "063f8734-c708-4ac4-90bf-5a2100f150c8", "address": "fa:16:3e:c8:3b:73", "network": {"id": "ce133cd3-da57-40b5-95f2-7f015476df55", "bridge": "br-int", "label": "tempest-ServersTestJSON-441071384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99d030bedd674ca8aef409ccc5f31fd2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap063f8734-c7", "ovs_interfaceid": "063f8734-c708-4ac4-90bf-5a2100f150c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.724 185654 DEBUG nova.network.os_vif_util [None req-bb06a9c5-f772-45b4-9996-fa3ddbddf96a b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:3b:73,bridge_name='br-int',has_traffic_filtering=True,id=063f8734-c708-4ac4-90bf-5a2100f150c8,network=Network(ce133cd3-da57-40b5-95f2-7f015476df55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap063f8734-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.724 185654 DEBUG os_vif [None req-bb06a9c5-f772-45b4-9996-fa3ddbddf96a b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:3b:73,bridge_name='br-int',has_traffic_filtering=True,id=063f8734-c708-4ac4-90bf-5a2100f150c8,network=Network(ce133cd3-da57-40b5-95f2-7f015476df55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap063f8734-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.727 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.733 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap063f8734-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.735 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:06 compute-0 systemd[1]: libpod-conmon-0b25ec9ee6b6ff9d294bde656ec1f8bbf6ea8a46ebb9eba253084695efbe5c79.scope: Deactivated successfully.
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.737 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.739 185654 INFO os_vif [None req-bb06a9c5-f772-45b4-9996-fa3ddbddf96a b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:3b:73,bridge_name='br-int',has_traffic_filtering=True,id=063f8734-c708-4ac4-90bf-5a2100f150c8,network=Network(ce133cd3-da57-40b5-95f2-7f015476df55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap063f8734-c7')
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.740 185654 INFO nova.virt.libvirt.driver [None req-bb06a9c5-f772-45b4-9996-fa3ddbddf96a b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Deleting instance files /var/lib/nova/instances/92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1_del
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.741 185654 INFO nova.virt.libvirt.driver [None req-bb06a9c5-f772-45b4-9996-fa3ddbddf96a b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Deletion of /var/lib/nova/instances/92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1_del complete
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.801 185654 INFO nova.compute.manager [None req-bb06a9c5-f772-45b4-9996-fa3ddbddf96a b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Took 0.42 seconds to destroy the instance on the hypervisor.
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.801 185654 DEBUG oslo.service.loopingcall [None req-bb06a9c5-f772-45b4-9996-fa3ddbddf96a b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.802 185654 DEBUG nova.compute.manager [-] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.802 185654 DEBUG nova.network.neutron [-] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 27 23:06:06 compute-0 podman[249324]: 2026-01-27 23:06:06.824477806 +0000 UTC m=+0.073505499 container remove 0b25ec9ee6b6ff9d294bde656ec1f8bbf6ea8a46ebb9eba253084695efbe5c79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce133cd3-da57-40b5-95f2-7f015476df55, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 23:06:06 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:06.833 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[f7459e10-0667-4178-8bb2-ddd6bc0991f3]: (4, ('Tue Jan 27 11:06:06 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ce133cd3-da57-40b5-95f2-7f015476df55 (0b25ec9ee6b6ff9d294bde656ec1f8bbf6ea8a46ebb9eba253084695efbe5c79)\n0b25ec9ee6b6ff9d294bde656ec1f8bbf6ea8a46ebb9eba253084695efbe5c79\nTue Jan 27 11:06:06 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ce133cd3-da57-40b5-95f2-7f015476df55 (0b25ec9ee6b6ff9d294bde656ec1f8bbf6ea8a46ebb9eba253084695efbe5c79)\n0b25ec9ee6b6ff9d294bde656ec1f8bbf6ea8a46ebb9eba253084695efbe5c79\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:06:06 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:06.835 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[7c07daba-11d7-464b-ac6f-e94f496f2816]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:06:06 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:06.836 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce133cd3-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.838 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:06 compute-0 kernel: tapce133cd3-d0: left promiscuous mode
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.855 185654 DEBUG nova.compute.manager [req-a15a4d37-09c7-4be1-b47c-04b808b1db1c req-4c1b60cb-9b3c-409f-99c0-15359a444ecb b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Received event network-vif-unplugged-063f8734-c708-4ac4-90bf-5a2100f150c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.856 185654 DEBUG oslo_concurrency.lockutils [req-a15a4d37-09c7-4be1-b47c-04b808b1db1c req-4c1b60cb-9b3c-409f-99c0-15359a444ecb b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.857 185654 DEBUG oslo_concurrency.lockutils [req-a15a4d37-09c7-4be1-b47c-04b808b1db1c req-4c1b60cb-9b3c-409f-99c0-15359a444ecb b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.857 185654 DEBUG oslo_concurrency.lockutils [req-a15a4d37-09c7-4be1-b47c-04b808b1db1c req-4c1b60cb-9b3c-409f-99c0-15359a444ecb b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:06:06 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:06.858 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[06918d98-3b7b-4e43-b802-a5dc73493f1f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.858 185654 DEBUG nova.compute.manager [req-a15a4d37-09c7-4be1-b47c-04b808b1db1c req-4c1b60cb-9b3c-409f-99c0-15359a444ecb b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] No waiting events found dispatching network-vif-unplugged-063f8734-c708-4ac4-90bf-5a2100f150c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.859 185654 DEBUG nova.compute.manager [req-a15a4d37-09c7-4be1-b47c-04b808b1db1c req-4c1b60cb-9b3c-409f-99c0-15359a444ecb b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Received event network-vif-unplugged-063f8734-c708-4ac4-90bf-5a2100f150c8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 27 23:06:06 compute-0 nova_compute[185650]: 2026-01-27 23:06:06.860 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:06 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:06.883 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[35f73c9d-be29-494a-a5de-64b6b7639d87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:06:06 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:06.884 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[ad418039-1bef-4c94-b508-deacee6ff2ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:06:06 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:06.899 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[bf95d9ed-dcbb-4c78-ba68-8ff6ae06fccc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498384, 'reachable_time': 34239, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249340, 'error': None, 'target': 'ovnmeta-ce133cd3-da57-40b5-95f2-7f015476df55', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:06:06 compute-0 systemd[1]: run-netns-ovnmeta\x2dce133cd3\x2dda57\x2d40b5\x2d95f2\x2d7f015476df55.mount: Deactivated successfully.
Jan 27 23:06:06 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:06.904 107797 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ce133cd3-da57-40b5-95f2-7f015476df55 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 27 23:06:06 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:06.904 107797 DEBUG oslo.privsep.daemon [-] privsep: reply[48bdc4c1-b153-4143-b54b-fdb57886a148]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:06:08 compute-0 nova_compute[185650]: 2026-01-27 23:06:08.298 185654 DEBUG nova.network.neutron [req-bbcfa4a2-4c65-4414-899c-53c1c5c5c6fd req-cf69649a-6380-4929-92f4-b77898717945 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Updated VIF entry in instance network info cache for port 063f8734-c708-4ac4-90bf-5a2100f150c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 23:06:08 compute-0 nova_compute[185650]: 2026-01-27 23:06:08.300 185654 DEBUG nova.network.neutron [req-bbcfa4a2-4c65-4414-899c-53c1c5c5c6fd req-cf69649a-6380-4929-92f4-b77898717945 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Updating instance_info_cache with network_info: [{"id": "063f8734-c708-4ac4-90bf-5a2100f150c8", "address": "fa:16:3e:c8:3b:73", "network": {"id": "ce133cd3-da57-40b5-95f2-7f015476df55", "bridge": "br-int", "label": "tempest-ServersTestJSON-441071384-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "99d030bedd674ca8aef409ccc5f31fd2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap063f8734-c7", "ovs_interfaceid": "063f8734-c708-4ac4-90bf-5a2100f150c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 23:06:08 compute-0 nova_compute[185650]: 2026-01-27 23:06:08.335 185654 DEBUG oslo_concurrency.lockutils [req-bbcfa4a2-4c65-4414-899c-53c1c5c5c6fd req-cf69649a-6380-4929-92f4-b77898717945 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Releasing lock "refresh_cache-92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 23:06:08 compute-0 nova_compute[185650]: 2026-01-27 23:06:08.626 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:06:08 compute-0 nova_compute[185650]: 2026-01-27 23:06:08.627 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 23:06:08 compute-0 nova_compute[185650]: 2026-01-27 23:06:08.627 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 23:06:08 compute-0 nova_compute[185650]: 2026-01-27 23:06:08.650 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Jan 27 23:06:08 compute-0 nova_compute[185650]: 2026-01-27 23:06:08.710 185654 DEBUG nova.network.neutron [-] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 23:06:08 compute-0 nova_compute[185650]: 2026-01-27 23:06:08.733 185654 INFO nova.compute.manager [-] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Took 1.93 seconds to deallocate network for instance.
Jan 27 23:06:08 compute-0 nova_compute[185650]: 2026-01-27 23:06:08.782 185654 DEBUG oslo_concurrency.lockutils [None req-bb06a9c5-f772-45b4-9996-fa3ddbddf96a b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:06:08 compute-0 nova_compute[185650]: 2026-01-27 23:06:08.784 185654 DEBUG oslo_concurrency.lockutils [None req-bb06a9c5-f772-45b4-9996-fa3ddbddf96a b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:06:08 compute-0 nova_compute[185650]: 2026-01-27 23:06:08.914 185654 DEBUG nova.compute.provider_tree [None req-bb06a9c5-f772-45b4-9996-fa3ddbddf96a b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 23:06:08 compute-0 nova_compute[185650]: 2026-01-27 23:06:08.937 185654 DEBUG nova.scheduler.client.report [None req-bb06a9c5-f772-45b4-9996-fa3ddbddf96a b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 23:06:08 compute-0 ovn_controller[98048]: 2026-01-27T23:06:08Z|00102|binding|INFO|Releasing lport babee362-409a-4d1f-bc47-c6a6dce734ff from this chassis (sb_readonly=0)
Jan 27 23:06:08 compute-0 ovn_controller[98048]: 2026-01-27T23:06:08Z|00103|binding|INFO|Releasing lport 41776a65-3925-474f-a135-3e28059d7e34 from this chassis (sb_readonly=0)
Jan 27 23:06:08 compute-0 nova_compute[185650]: 2026-01-27 23:06:08.963 185654 DEBUG oslo_concurrency.lockutils [None req-bb06a9c5-f772-45b4-9996-fa3ddbddf96a b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.179s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:06:08 compute-0 nova_compute[185650]: 2026-01-27 23:06:08.978 185654 DEBUG nova.compute.manager [req-d86fcd8a-10e4-4284-9e53-86f364c8029a req-2cea7689-cee5-4950-a055-13bf1a7bfe8f b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Received event network-vif-plugged-063f8734-c708-4ac4-90bf-5a2100f150c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 23:06:08 compute-0 nova_compute[185650]: 2026-01-27 23:06:08.979 185654 DEBUG oslo_concurrency.lockutils [req-d86fcd8a-10e4-4284-9e53-86f364c8029a req-2cea7689-cee5-4950-a055-13bf1a7bfe8f b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:06:08 compute-0 nova_compute[185650]: 2026-01-27 23:06:08.979 185654 DEBUG oslo_concurrency.lockutils [req-d86fcd8a-10e4-4284-9e53-86f364c8029a req-2cea7689-cee5-4950-a055-13bf1a7bfe8f b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:06:08 compute-0 nova_compute[185650]: 2026-01-27 23:06:08.979 185654 DEBUG oslo_concurrency.lockutils [req-d86fcd8a-10e4-4284-9e53-86f364c8029a req-2cea7689-cee5-4950-a055-13bf1a7bfe8f b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:06:08 compute-0 nova_compute[185650]: 2026-01-27 23:06:08.980 185654 DEBUG nova.compute.manager [req-d86fcd8a-10e4-4284-9e53-86f364c8029a req-2cea7689-cee5-4950-a055-13bf1a7bfe8f b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] No waiting events found dispatching network-vif-plugged-063f8734-c708-4ac4-90bf-5a2100f150c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 23:06:08 compute-0 nova_compute[185650]: 2026-01-27 23:06:08.980 185654 WARNING nova.compute.manager [req-d86fcd8a-10e4-4284-9e53-86f364c8029a req-2cea7689-cee5-4950-a055-13bf1a7bfe8f b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Received unexpected event network-vif-plugged-063f8734-c708-4ac4-90bf-5a2100f150c8 for instance with vm_state deleted and task_state None.
Jan 27 23:06:08 compute-0 nova_compute[185650]: 2026-01-27 23:06:08.980 185654 DEBUG nova.compute.manager [req-d86fcd8a-10e4-4284-9e53-86f364c8029a req-2cea7689-cee5-4950-a055-13bf1a7bfe8f b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Received event network-vif-deleted-063f8734-c708-4ac4-90bf-5a2100f150c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 23:06:09 compute-0 nova_compute[185650]: 2026-01-27 23:06:09.009 185654 INFO nova.scheduler.client.report [None req-bb06a9c5-f772-45b4-9996-fa3ddbddf96a b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Deleted allocations for instance 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1
Jan 27 23:06:09 compute-0 nova_compute[185650]: 2026-01-27 23:06:09.026 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:09 compute-0 nova_compute[185650]: 2026-01-27 23:06:09.086 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "refresh_cache-66eb7f87-9511-4da7-8733-ef0673cfab67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 23:06:09 compute-0 nova_compute[185650]: 2026-01-27 23:06:09.087 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquired lock "refresh_cache-66eb7f87-9511-4da7-8733-ef0673cfab67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 23:06:09 compute-0 nova_compute[185650]: 2026-01-27 23:06:09.087 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 23:06:09 compute-0 nova_compute[185650]: 2026-01-27 23:06:09.087 185654 DEBUG nova.objects.instance [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 66eb7f87-9511-4da7-8733-ef0673cfab67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 23:06:09 compute-0 nova_compute[185650]: 2026-01-27 23:06:09.090 185654 DEBUG oslo_concurrency.lockutils [None req-bb06a9c5-f772-45b4-9996-fa3ddbddf96a b661812adddc45d4beba73ca32253b11 99d030bedd674ca8aef409ccc5f31fd2 - - default default] Lock "92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:06:10 compute-0 nova_compute[185650]: 2026-01-27 23:06:10.586 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:11 compute-0 nova_compute[185650]: 2026-01-27 23:06:11.018 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Updating instance_info_cache with network_info: [{"id": "64b86a6b-6de4-4fee-917e-229794042e8e", "address": "fa:16:3e:23:60:c6", "network": {"id": "6d0f9d9e-8cd6-4a68-8926-de88e69f60d4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1504245290-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "270690dca2514a49843b866111c87d39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64b86a6b-6d", "ovs_interfaceid": "64b86a6b-6de4-4fee-917e-229794042e8e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 23:06:11 compute-0 nova_compute[185650]: 2026-01-27 23:06:11.043 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Releasing lock "refresh_cache-66eb7f87-9511-4da7-8733-ef0673cfab67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 23:06:11 compute-0 nova_compute[185650]: 2026-01-27 23:06:11.043 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 23:06:11 compute-0 nova_compute[185650]: 2026-01-27 23:06:11.044 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:06:11 compute-0 nova_compute[185650]: 2026-01-27 23:06:11.044 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:06:11 compute-0 nova_compute[185650]: 2026-01-27 23:06:11.045 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:06:11 compute-0 nova_compute[185650]: 2026-01-27 23:06:11.045 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:06:11 compute-0 nova_compute[185650]: 2026-01-27 23:06:11.045 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:06:11 compute-0 nova_compute[185650]: 2026-01-27 23:06:11.046 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:06:11 compute-0 nova_compute[185650]: 2026-01-27 23:06:11.046 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 23:06:11 compute-0 nova_compute[185650]: 2026-01-27 23:06:11.736 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:13 compute-0 nova_compute[185650]: 2026-01-27 23:06:13.585 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:14 compute-0 podman[249341]: 2026-01-27 23:06:14.772370673 +0000 UTC m=+0.079847978 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 23:06:15 compute-0 nova_compute[185650]: 2026-01-27 23:06:15.586 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:16 compute-0 nova_compute[185650]: 2026-01-27 23:06:16.563 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:16 compute-0 nova_compute[185650]: 2026-01-27 23:06:16.739 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:17 compute-0 nova_compute[185650]: 2026-01-27 23:06:17.145 185654 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769555162.1438608, a5213d25-e31d-4018-991a-ffcc9a3cf495 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 23:06:17 compute-0 nova_compute[185650]: 2026-01-27 23:06:17.146 185654 INFO nova.compute.manager [-] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] VM Stopped (Lifecycle Event)
Jan 27 23:06:17 compute-0 nova_compute[185650]: 2026-01-27 23:06:17.172 185654 DEBUG nova.compute.manager [None req-4ef0dd29-7a79-404e-9f91-978ab69cfe84 - - - - - -] [instance: a5213d25-e31d-4018-991a-ffcc9a3cf495] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 23:06:17 compute-0 podman[249365]: 2026-01-27 23:06:17.411427652 +0000 UTC m=+0.112477336 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, maintainer=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, distribution-scope=public, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Jan 27 23:06:18 compute-0 ovn_controller[98048]: 2026-01-27T23:06:18Z|00104|binding|INFO|Releasing lport babee362-409a-4d1f-bc47-c6a6dce734ff from this chassis (sb_readonly=0)
Jan 27 23:06:18 compute-0 ovn_controller[98048]: 2026-01-27T23:06:18Z|00105|binding|INFO|Releasing lport 41776a65-3925-474f-a135-3e28059d7e34 from this chassis (sb_readonly=0)
Jan 27 23:06:19 compute-0 nova_compute[185650]: 2026-01-27 23:06:19.058 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:20 compute-0 nova_compute[185650]: 2026-01-27 23:06:20.410 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:06:20 compute-0 nova_compute[185650]: 2026-01-27 23:06:20.589 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:21 compute-0 nova_compute[185650]: 2026-01-27 23:06:21.697 185654 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769555166.6962829, 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 23:06:21 compute-0 nova_compute[185650]: 2026-01-27 23:06:21.698 185654 INFO nova.compute.manager [-] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] VM Stopped (Lifecycle Event)
Jan 27 23:06:21 compute-0 nova_compute[185650]: 2026-01-27 23:06:21.721 185654 DEBUG nova.compute.manager [None req-c0da1d86-1606-48ef-ba94-e4163e278410 - - - - - -] [instance: 92d2b3b7-cfcb-43dc-bfbe-a80c8d0f1fe1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 23:06:21 compute-0 nova_compute[185650]: 2026-01-27 23:06:21.742 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:23 compute-0 nova_compute[185650]: 2026-01-27 23:06:23.535 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:25 compute-0 nova_compute[185650]: 2026-01-27 23:06:25.448 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:25 compute-0 nova_compute[185650]: 2026-01-27 23:06:25.591 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:26 compute-0 podman[249397]: 2026-01-27 23:06:26.371402569 +0000 UTC m=+0.072402779 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 27 23:06:26 compute-0 podman[249398]: 2026-01-27 23:06:26.396636471 +0000 UTC m=+0.091591279 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4)
Jan 27 23:06:26 compute-0 nova_compute[185650]: 2026-01-27 23:06:26.746 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:27 compute-0 ovn_controller[98048]: 2026-01-27T23:06:27Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:23:60:c6 10.100.0.8
Jan 27 23:06:27 compute-0 ovn_controller[98048]: 2026-01-27T23:06:27Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:23:60:c6 10.100.0.8
Jan 27 23:06:29 compute-0 nova_compute[185650]: 2026-01-27 23:06:29.641 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:29 compute-0 podman[201529]: time="2026-01-27T23:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 23:06:29 compute-0 podman[201529]: @ - - [27/Jan/2026:23:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29741 "" "Go-http-client/1.1"
Jan 27 23:06:29 compute-0 podman[201529]: @ - - [27/Jan/2026:23:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4846 "" "Go-http-client/1.1"
Jan 27 23:06:30 compute-0 nova_compute[185650]: 2026-01-27 23:06:30.595 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:31 compute-0 nova_compute[185650]: 2026-01-27 23:06:31.206 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:31 compute-0 openstack_network_exporter[204648]: ERROR   23:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 23:06:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:06:31 compute-0 openstack_network_exporter[204648]: ERROR   23:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 23:06:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:06:31 compute-0 nova_compute[185650]: 2026-01-27 23:06:31.748 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:32 compute-0 podman[249450]: 2026-01-27 23:06:32.388106566 +0000 UTC m=+0.087292595 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 23:06:32 compute-0 ovn_controller[98048]: 2026-01-27T23:06:32Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:81:28:a4 10.100.0.11
Jan 27 23:06:32 compute-0 ovn_controller[98048]: 2026-01-27T23:06:32Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:81:28:a4 10.100.0.11
Jan 27 23:06:35 compute-0 nova_compute[185650]: 2026-01-27 23:06:35.149 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:35 compute-0 nova_compute[185650]: 2026-01-27 23:06:35.598 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:36 compute-0 podman[249472]: 2026-01-27 23:06:36.423896332 +0000 UTC m=+0.104905765 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 27 23:06:36 compute-0 nova_compute[185650]: 2026-01-27 23:06:36.753 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:37 compute-0 podman[249492]: 2026-01-27 23:06:37.388212612 +0000 UTC m=+0.076882849 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.buildah.version=1.29.0, architecture=x86_64, release=1214.1726694543, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, vendor=Red Hat, Inc., config_id=kepler, distribution-scope=public, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release-0.7.12=)
Jan 27 23:06:37 compute-0 podman[249493]: 2026-01-27 23:06:37.452539235 +0000 UTC m=+0.140375429 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Jan 27 23:06:39 compute-0 nova_compute[185650]: 2026-01-27 23:06:39.667 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:40 compute-0 nova_compute[185650]: 2026-01-27 23:06:40.382 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:40 compute-0 nova_compute[185650]: 2026-01-27 23:06:40.601 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:41 compute-0 nova_compute[185650]: 2026-01-27 23:06:41.756 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:41 compute-0 ovn_controller[98048]: 2026-01-27T23:06:41Z|00106|memory|INFO|peak resident set size grew 50% in last 2343.4 seconds, from 16000 kB to 24024 kB
Jan 27 23:06:41 compute-0 ovn_controller[98048]: 2026-01-27T23:06:41Z|00107|memory|INFO|idl-cells-OVN_Southbound:11234 idl-cells-Open_vSwitch:813 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:387 lflow-cache-entries-cache-matches:299 lflow-cache-size-KB:1611 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:663 ofctrl_installed_flow_usage-KB:483 ofctrl_sb_flow_ref_usage-KB:251
Jan 27 23:06:42 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:42.220 107302 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:41:f8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '26:ae:8e:b8:80:28'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 23:06:42 compute-0 nova_compute[185650]: 2026-01-27 23:06:42.221 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:42 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:42.222 107302 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 27 23:06:45 compute-0 podman[249534]: 2026-01-27 23:06:45.384316421 +0000 UTC m=+0.082450316 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 23:06:45 compute-0 nova_compute[185650]: 2026-01-27 23:06:45.603 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:46 compute-0 nova_compute[185650]: 2026-01-27 23:06:46.760 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:47 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:06:47.225 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e88f80e1-ee63-4bdc-95c3-ad473efb7428, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:06:48 compute-0 podman[249558]: 2026-01-27 23:06:48.381810884 +0000 UTC m=+0.079775925 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-type=git, config_id=openstack_network_exporter)
Jan 27 23:06:48 compute-0 ovn_controller[98048]: 2026-01-27T23:06:48Z|00108|binding|INFO|Releasing lport babee362-409a-4d1f-bc47-c6a6dce734ff from this chassis (sb_readonly=0)
Jan 27 23:06:48 compute-0 ovn_controller[98048]: 2026-01-27T23:06:48Z|00109|binding|INFO|Releasing lport 41776a65-3925-474f-a135-3e28059d7e34 from this chassis (sb_readonly=0)
Jan 27 23:06:48 compute-0 nova_compute[185650]: 2026-01-27 23:06:48.686 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:49 compute-0 nova_compute[185650]: 2026-01-27 23:06:49.067 185654 DEBUG oslo_concurrency.lockutils [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Acquiring lock "6e4e7f3d-60d3-49cf-b7be-e93194c45a44" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:06:49 compute-0 nova_compute[185650]: 2026-01-27 23:06:49.067 185654 DEBUG oslo_concurrency.lockutils [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Lock "6e4e7f3d-60d3-49cf-b7be-e93194c45a44" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:06:49 compute-0 nova_compute[185650]: 2026-01-27 23:06:49.085 185654 DEBUG nova.compute.manager [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 23:06:49 compute-0 nova_compute[185650]: 2026-01-27 23:06:49.374 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:49 compute-0 nova_compute[185650]: 2026-01-27 23:06:49.696 185654 DEBUG oslo_concurrency.lockutils [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:06:49 compute-0 nova_compute[185650]: 2026-01-27 23:06:49.697 185654 DEBUG oslo_concurrency.lockutils [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:06:49 compute-0 nova_compute[185650]: 2026-01-27 23:06:49.826 185654 DEBUG nova.virt.hardware [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 23:06:49 compute-0 nova_compute[185650]: 2026-01-27 23:06:49.826 185654 INFO nova.compute.claims [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Claim successful on node compute-0.ctlplane.example.com
Jan 27 23:06:50 compute-0 nova_compute[185650]: 2026-01-27 23:06:50.605 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:50 compute-0 nova_compute[185650]: 2026-01-27 23:06:50.632 185654 DEBUG nova.compute.provider_tree [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 23:06:50 compute-0 nova_compute[185650]: 2026-01-27 23:06:50.726 185654 DEBUG nova.scheduler.client.report [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 23:06:50 compute-0 nova_compute[185650]: 2026-01-27 23:06:50.901 185654 DEBUG oslo_concurrency.lockutils [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.204s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:06:50 compute-0 nova_compute[185650]: 2026-01-27 23:06:50.902 185654 DEBUG nova.compute.manager [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 23:06:51 compute-0 nova_compute[185650]: 2026-01-27 23:06:51.139 185654 DEBUG nova.compute.manager [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 23:06:51 compute-0 nova_compute[185650]: 2026-01-27 23:06:51.141 185654 DEBUG nova.network.neutron [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 23:06:51 compute-0 nova_compute[185650]: 2026-01-27 23:06:51.162 185654 INFO nova.virt.libvirt.driver [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 23:06:51 compute-0 nova_compute[185650]: 2026-01-27 23:06:51.186 185654 DEBUG nova.compute.manager [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 23:06:51 compute-0 nova_compute[185650]: 2026-01-27 23:06:51.324 185654 DEBUG nova.compute.manager [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 23:06:51 compute-0 nova_compute[185650]: 2026-01-27 23:06:51.327 185654 DEBUG nova.virt.libvirt.driver [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 23:06:51 compute-0 nova_compute[185650]: 2026-01-27 23:06:51.328 185654 INFO nova.virt.libvirt.driver [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Creating image(s)
Jan 27 23:06:51 compute-0 nova_compute[185650]: 2026-01-27 23:06:51.330 185654 DEBUG oslo_concurrency.lockutils [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Acquiring lock "/var/lib/nova/instances/6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:06:51 compute-0 nova_compute[185650]: 2026-01-27 23:06:51.330 185654 DEBUG oslo_concurrency.lockutils [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Lock "/var/lib/nova/instances/6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:06:51 compute-0 nova_compute[185650]: 2026-01-27 23:06:51.332 185654 DEBUG oslo_concurrency.lockutils [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Lock "/var/lib/nova/instances/6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:06:51 compute-0 nova_compute[185650]: 2026-01-27 23:06:51.362 185654 DEBUG oslo_concurrency.processutils [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:06:51 compute-0 nova_compute[185650]: 2026-01-27 23:06:51.445 185654 DEBUG oslo_concurrency.processutils [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:06:51 compute-0 nova_compute[185650]: 2026-01-27 23:06:51.446 185654 DEBUG oslo_concurrency.lockutils [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Acquiring lock "1e4e814900a1ccc0cddf32336f7d631bc193ea2c" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:06:51 compute-0 nova_compute[185650]: 2026-01-27 23:06:51.447 185654 DEBUG oslo_concurrency.lockutils [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Lock "1e4e814900a1ccc0cddf32336f7d631bc193ea2c" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:06:51 compute-0 nova_compute[185650]: 2026-01-27 23:06:51.457 185654 DEBUG oslo_concurrency.processutils [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:06:51 compute-0 nova_compute[185650]: 2026-01-27 23:06:51.481 185654 DEBUG nova.policy [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea2353d747c04d31940685f5b6330baa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '96e79f52da2341129f0c6e2459dae69d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 23:06:51 compute-0 nova_compute[185650]: 2026-01-27 23:06:51.545 185654 DEBUG oslo_concurrency.processutils [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:06:51 compute-0 nova_compute[185650]: 2026-01-27 23:06:51.546 185654 DEBUG oslo_concurrency.processutils [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c,backing_fmt=raw /var/lib/nova/instances/6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:06:51 compute-0 nova_compute[185650]: 2026-01-27 23:06:51.583 185654 DEBUG oslo_concurrency.processutils [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c,backing_fmt=raw /var/lib/nova/instances/6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk 1073741824" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:06:51 compute-0 nova_compute[185650]: 2026-01-27 23:06:51.585 185654 DEBUG oslo_concurrency.lockutils [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Lock "1e4e814900a1ccc0cddf32336f7d631bc193ea2c" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:06:51 compute-0 nova_compute[185650]: 2026-01-27 23:06:51.585 185654 DEBUG oslo_concurrency.processutils [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:06:51 compute-0 nova_compute[185650]: 2026-01-27 23:06:51.651 185654 DEBUG oslo_concurrency.processutils [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:06:51 compute-0 nova_compute[185650]: 2026-01-27 23:06:51.653 185654 DEBUG nova.virt.disk.api [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Checking if we can resize image /var/lib/nova/instances/6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 23:06:51 compute-0 nova_compute[185650]: 2026-01-27 23:06:51.653 185654 DEBUG oslo_concurrency.processutils [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:06:51 compute-0 nova_compute[185650]: 2026-01-27 23:06:51.721 185654 DEBUG oslo_concurrency.processutils [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:06:51 compute-0 nova_compute[185650]: 2026-01-27 23:06:51.725 185654 DEBUG nova.virt.disk.api [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Cannot resize image /var/lib/nova/instances/6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 23:06:51 compute-0 nova_compute[185650]: 2026-01-27 23:06:51.726 185654 DEBUG nova.objects.instance [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Lazy-loading 'migration_context' on Instance uuid 6e4e7f3d-60d3-49cf-b7be-e93194c45a44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 23:06:51 compute-0 nova_compute[185650]: 2026-01-27 23:06:51.764 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:51 compute-0 nova_compute[185650]: 2026-01-27 23:06:51.830 185654 DEBUG nova.virt.libvirt.driver [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 23:06:51 compute-0 nova_compute[185650]: 2026-01-27 23:06:51.831 185654 DEBUG nova.virt.libvirt.driver [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Ensure instance console log exists: /var/lib/nova/instances/6e4e7f3d-60d3-49cf-b7be-e93194c45a44/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 23:06:51 compute-0 nova_compute[185650]: 2026-01-27 23:06:51.833 185654 DEBUG oslo_concurrency.lockutils [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:06:51 compute-0 nova_compute[185650]: 2026-01-27 23:06:51.833 185654 DEBUG oslo_concurrency.lockutils [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:06:51 compute-0 nova_compute[185650]: 2026-01-27 23:06:51.834 185654 DEBUG oslo_concurrency.lockutils [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:06:53 compute-0 nova_compute[185650]: 2026-01-27 23:06:53.329 185654 DEBUG nova.network.neutron [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Successfully created port: 2621603a-6426-42bf-8eb4-1b772c4b8ec7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 27 23:06:54 compute-0 nova_compute[185650]: 2026-01-27 23:06:54.603 185654 DEBUG nova.network.neutron [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Successfully updated port: 2621603a-6426-42bf-8eb4-1b772c4b8ec7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 27 23:06:55 compute-0 nova_compute[185650]: 2026-01-27 23:06:55.027 185654 DEBUG oslo_concurrency.lockutils [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Acquiring lock "refresh_cache-6e4e7f3d-60d3-49cf-b7be-e93194c45a44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 23:06:55 compute-0 nova_compute[185650]: 2026-01-27 23:06:55.027 185654 DEBUG oslo_concurrency.lockutils [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Acquired lock "refresh_cache-6e4e7f3d-60d3-49cf-b7be-e93194c45a44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 23:06:55 compute-0 nova_compute[185650]: 2026-01-27 23:06:55.028 185654 DEBUG nova.network.neutron [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 23:06:55 compute-0 nova_compute[185650]: 2026-01-27 23:06:55.607 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:56 compute-0 nova_compute[185650]: 2026-01-27 23:06:56.342 185654 DEBUG nova.network.neutron [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 27 23:06:56 compute-0 nova_compute[185650]: 2026-01-27 23:06:56.768 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:06:57 compute-0 podman[249596]: 2026-01-27 23:06:57.381054149 +0000 UTC m=+0.079368159 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, org.label-schema.build-date=20260126, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4)
Jan 27 23:06:57 compute-0 podman[249595]: 2026-01-27 23:06:57.395379702 +0000 UTC m=+0.095379957 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 27 23:06:57 compute-0 nova_compute[185650]: 2026-01-27 23:06:57.724 185654 DEBUG nova.compute.manager [req-1c29b457-5b23-48ce-9bb5-067b7d467fd6 req-41c0dc9c-41c2-4020-a7e1-fa7143fd8177 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Received event network-changed-2621603a-6426-42bf-8eb4-1b772c4b8ec7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 23:06:57 compute-0 nova_compute[185650]: 2026-01-27 23:06:57.725 185654 DEBUG nova.compute.manager [req-1c29b457-5b23-48ce-9bb5-067b7d467fd6 req-41c0dc9c-41c2-4020-a7e1-fa7143fd8177 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Refreshing instance network info cache due to event network-changed-2621603a-6426-42bf-8eb4-1b772c4b8ec7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 23:06:57 compute-0 nova_compute[185650]: 2026-01-27 23:06:57.727 185654 DEBUG oslo_concurrency.lockutils [req-1c29b457-5b23-48ce-9bb5-067b7d467fd6 req-41c0dc9c-41c2-4020-a7e1-fa7143fd8177 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "refresh_cache-6e4e7f3d-60d3-49cf-b7be-e93194c45a44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 23:06:59 compute-0 nova_compute[185650]: 2026-01-27 23:06:59.186 185654 DEBUG nova.network.neutron [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Updating instance_info_cache with network_info: [{"id": "2621603a-6426-42bf-8eb4-1b772c4b8ec7", "address": "fa:16:3e:fc:26:ef", "network": {"id": "858b1061-c06a-46f7-bd0e-407aa8dea432", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1392185424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e79f52da2341129f0c6e2459dae69d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2621603a-64", "ovs_interfaceid": "2621603a-6426-42bf-8eb4-1b772c4b8ec7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.023 185654 DEBUG oslo_concurrency.lockutils [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Releasing lock "refresh_cache-6e4e7f3d-60d3-49cf-b7be-e93194c45a44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.024 185654 DEBUG nova.compute.manager [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Instance network_info: |[{"id": "2621603a-6426-42bf-8eb4-1b772c4b8ec7", "address": "fa:16:3e:fc:26:ef", "network": {"id": "858b1061-c06a-46f7-bd0e-407aa8dea432", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1392185424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e79f52da2341129f0c6e2459dae69d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2621603a-64", "ovs_interfaceid": "2621603a-6426-42bf-8eb4-1b772c4b8ec7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.024 185654 DEBUG oslo_concurrency.lockutils [req-1c29b457-5b23-48ce-9bb5-067b7d467fd6 req-41c0dc9c-41c2-4020-a7e1-fa7143fd8177 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquired lock "refresh_cache-6e4e7f3d-60d3-49cf-b7be-e93194c45a44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.024 185654 DEBUG nova.network.neutron [req-1c29b457-5b23-48ce-9bb5-067b7d467fd6 req-41c0dc9c-41c2-4020-a7e1-fa7143fd8177 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Refreshing network info cache for port 2621603a-6426-42bf-8eb4-1b772c4b8ec7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.027 185654 DEBUG nova.virt.libvirt.driver [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Start _get_guest_xml network_info=[{"id": "2621603a-6426-42bf-8eb4-1b772c4b8ec7", "address": "fa:16:3e:fc:26:ef", "network": {"id": "858b1061-c06a-46f7-bd0e-407aa8dea432", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1392185424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e79f52da2341129f0c6e2459dae69d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2621603a-64", "ovs_interfaceid": "2621603a-6426-42bf-8eb4-1b772c4b8ec7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T23:04:44Z,direct_url=<?>,disk_format='qcow2',id=319632d9-1bdd-4de0-b1d2-0507a3e91b6b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8318d5a200d74e4386cf4972db015b75',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T23:04:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_format': None, 'boot_index': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '319632d9-1bdd-4de0-b1d2-0507a3e91b6b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.033 185654 WARNING nova.virt.libvirt.driver [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.203 185654 DEBUG nova.virt.libvirt.host [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.204 185654 DEBUG nova.virt.libvirt.host [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.209 185654 DEBUG nova.virt.libvirt.host [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.210 185654 DEBUG nova.virt.libvirt.host [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.211 185654 DEBUG nova.virt.libvirt.driver [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.212 185654 DEBUG nova.virt.hardware [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-27T23:04:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='d732a0b9-79cd-4ff7-8741-11ae188a8b69',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-27T23:04:44Z,direct_url=<?>,disk_format='qcow2',id=319632d9-1bdd-4de0-b1d2-0507a3e91b6b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8318d5a200d74e4386cf4972db015b75',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-27T23:04:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.213 185654 DEBUG nova.virt.hardware [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.214 185654 DEBUG nova.virt.hardware [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.215 185654 DEBUG nova.virt.hardware [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.215 185654 DEBUG nova.virt.hardware [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.216 185654 DEBUG nova.virt.hardware [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.217 185654 DEBUG nova.virt.hardware [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.217 185654 DEBUG nova.virt.hardware [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.218 185654 DEBUG nova.virt.hardware [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.219 185654 DEBUG nova.virt.hardware [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.220 185654 DEBUG nova.virt.hardware [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.226 185654 DEBUG nova.virt.libvirt.vif [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T23:06:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1823238454',display_name='tempest-ServersTestManualDisk-server-1823238454',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1823238454',id=10,image_ref='319632d9-1bdd-4de0-b1d2-0507a3e91b6b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEIAhoOXFRYlR3dQeYWnU3bPPp2QaLmbJOs0xIGPPjECXd1s4RIirz8l3ShSbGCktwMvo2kKXuJt0Qo9etz6G/ObRY7P/5Fej+Abfm4LRnuAbLTBhs9ANVuifQfpd47M3g==',key_name='tempest-keypair-675614874',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='96e79f52da2341129f0c6e2459dae69d',ramdisk_id='',reservation_id='r-085sq4on',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='319632d9-1bdd-4de0-b1d2-0507a3e91b6b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-19452803',owner_user_name='tempest-ServersTestManualDisk-19452803-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T23:06:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea2353d747c04d31940685f5b6330baa',uuid=6e4e7f3d-60d3-49cf-b7be-e93194c45a44,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2621603a-6426-42bf-8eb4-1b772c4b8ec7", "address": "fa:16:3e:fc:26:ef", "network": {"id": "858b1061-c06a-46f7-bd0e-407aa8dea432", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1392185424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e79f52da2341129f0c6e2459dae69d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2621603a-64", "ovs_interfaceid": "2621603a-6426-42bf-8eb4-1b772c4b8ec7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.227 185654 DEBUG nova.network.os_vif_util [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Converting VIF {"id": "2621603a-6426-42bf-8eb4-1b772c4b8ec7", "address": "fa:16:3e:fc:26:ef", "network": {"id": "858b1061-c06a-46f7-bd0e-407aa8dea432", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1392185424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e79f52da2341129f0c6e2459dae69d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2621603a-64", "ovs_interfaceid": "2621603a-6426-42bf-8eb4-1b772c4b8ec7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.229 185654 DEBUG nova.network.os_vif_util [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:26:ef,bridge_name='br-int',has_traffic_filtering=True,id=2621603a-6426-42bf-8eb4-1b772c4b8ec7,network=Network(858b1061-c06a-46f7-bd0e-407aa8dea432),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2621603a-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.231 185654 DEBUG nova.objects.instance [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Lazy-loading 'pci_devices' on Instance uuid 6e4e7f3d-60d3-49cf-b7be-e93194c45a44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.246 185654 DEBUG nova.virt.libvirt.driver [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] End _get_guest_xml xml=<domain type="kvm">
Jan 27 23:07:00 compute-0 nova_compute[185650]:   <uuid>6e4e7f3d-60d3-49cf-b7be-e93194c45a44</uuid>
Jan 27 23:07:00 compute-0 nova_compute[185650]:   <name>instance-0000000a</name>
Jan 27 23:07:00 compute-0 nova_compute[185650]:   <memory>131072</memory>
Jan 27 23:07:00 compute-0 nova_compute[185650]:   <vcpu>1</vcpu>
Jan 27 23:07:00 compute-0 nova_compute[185650]:   <metadata>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 27 23:07:00 compute-0 nova_compute[185650]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:       <nova:name>tempest-ServersTestManualDisk-server-1823238454</nova:name>
Jan 27 23:07:00 compute-0 nova_compute[185650]:       <nova:creationTime>2026-01-27 23:07:00</nova:creationTime>
Jan 27 23:07:00 compute-0 nova_compute[185650]:       <nova:flavor name="m1.nano">
Jan 27 23:07:00 compute-0 nova_compute[185650]:         <nova:memory>128</nova:memory>
Jan 27 23:07:00 compute-0 nova_compute[185650]:         <nova:disk>1</nova:disk>
Jan 27 23:07:00 compute-0 nova_compute[185650]:         <nova:swap>0</nova:swap>
Jan 27 23:07:00 compute-0 nova_compute[185650]:         <nova:ephemeral>0</nova:ephemeral>
Jan 27 23:07:00 compute-0 nova_compute[185650]:         <nova:vcpus>1</nova:vcpus>
Jan 27 23:07:00 compute-0 nova_compute[185650]:       </nova:flavor>
Jan 27 23:07:00 compute-0 nova_compute[185650]:       <nova:owner>
Jan 27 23:07:00 compute-0 nova_compute[185650]:         <nova:user uuid="ea2353d747c04d31940685f5b6330baa">tempest-ServersTestManualDisk-19452803-project-member</nova:user>
Jan 27 23:07:00 compute-0 nova_compute[185650]:         <nova:project uuid="96e79f52da2341129f0c6e2459dae69d">tempest-ServersTestManualDisk-19452803</nova:project>
Jan 27 23:07:00 compute-0 nova_compute[185650]:       </nova:owner>
Jan 27 23:07:00 compute-0 nova_compute[185650]:       <nova:root type="image" uuid="319632d9-1bdd-4de0-b1d2-0507a3e91b6b"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:       <nova:ports>
Jan 27 23:07:00 compute-0 nova_compute[185650]:         <nova:port uuid="2621603a-6426-42bf-8eb4-1b772c4b8ec7">
Jan 27 23:07:00 compute-0 nova_compute[185650]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:         </nova:port>
Jan 27 23:07:00 compute-0 nova_compute[185650]:       </nova:ports>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     </nova:instance>
Jan 27 23:07:00 compute-0 nova_compute[185650]:   </metadata>
Jan 27 23:07:00 compute-0 nova_compute[185650]:   <sysinfo type="smbios">
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <system>
Jan 27 23:07:00 compute-0 nova_compute[185650]:       <entry name="manufacturer">RDO</entry>
Jan 27 23:07:00 compute-0 nova_compute[185650]:       <entry name="product">OpenStack Compute</entry>
Jan 27 23:07:00 compute-0 nova_compute[185650]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 27 23:07:00 compute-0 nova_compute[185650]:       <entry name="serial">6e4e7f3d-60d3-49cf-b7be-e93194c45a44</entry>
Jan 27 23:07:00 compute-0 nova_compute[185650]:       <entry name="uuid">6e4e7f3d-60d3-49cf-b7be-e93194c45a44</entry>
Jan 27 23:07:00 compute-0 nova_compute[185650]:       <entry name="family">Virtual Machine</entry>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     </system>
Jan 27 23:07:00 compute-0 nova_compute[185650]:   </sysinfo>
Jan 27 23:07:00 compute-0 nova_compute[185650]:   <os>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <boot dev="hd"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <smbios mode="sysinfo"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:   </os>
Jan 27 23:07:00 compute-0 nova_compute[185650]:   <features>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <acpi/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <apic/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <vmcoreinfo/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:   </features>
Jan 27 23:07:00 compute-0 nova_compute[185650]:   <clock offset="utc">
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <timer name="pit" tickpolicy="delay"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <timer name="hpet" present="no"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:   </clock>
Jan 27 23:07:00 compute-0 nova_compute[185650]:   <cpu mode="host-model" match="exact">
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <topology sockets="1" cores="1" threads="1"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:   </cpu>
Jan 27 23:07:00 compute-0 nova_compute[185650]:   <devices>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <disk type="file" device="disk">
Jan 27 23:07:00 compute-0 nova_compute[185650]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:       <source file="/var/lib/nova/instances/6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:       <target dev="vda" bus="virtio"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     </disk>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <disk type="file" device="cdrom">
Jan 27 23:07:00 compute-0 nova_compute[185650]:       <driver name="qemu" type="raw" cache="none"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:       <source file="/var/lib/nova/instances/6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk.config"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:       <target dev="sda" bus="sata"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     </disk>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <interface type="ethernet">
Jan 27 23:07:00 compute-0 nova_compute[185650]:       <mac address="fa:16:3e:fc:26:ef"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:       <model type="virtio"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:       <driver name="vhost" rx_queue_size="512"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:       <mtu size="1442"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:       <target dev="tap2621603a-64"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     </interface>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <serial type="pty">
Jan 27 23:07:00 compute-0 nova_compute[185650]:       <log file="/var/lib/nova/instances/6e4e7f3d-60d3-49cf-b7be-e93194c45a44/console.log" append="off"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     </serial>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <video>
Jan 27 23:07:00 compute-0 nova_compute[185650]:       <model type="virtio"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     </video>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <input type="tablet" bus="usb"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <rng model="virtio">
Jan 27 23:07:00 compute-0 nova_compute[185650]:       <backend model="random">/dev/urandom</backend>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     </rng>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <controller type="pci" model="pcie-root-port"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <controller type="usb" index="0"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     <memballoon model="virtio">
Jan 27 23:07:00 compute-0 nova_compute[185650]:       <stats period="10"/>
Jan 27 23:07:00 compute-0 nova_compute[185650]:     </memballoon>
Jan 27 23:07:00 compute-0 nova_compute[185650]:   </devices>
Jan 27 23:07:00 compute-0 nova_compute[185650]: </domain>
Jan 27 23:07:00 compute-0 nova_compute[185650]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.247 185654 DEBUG nova.compute.manager [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Preparing to wait for external event network-vif-plugged-2621603a-6426-42bf-8eb4-1b772c4b8ec7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.247 185654 DEBUG oslo_concurrency.lockutils [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Acquiring lock "6e4e7f3d-60d3-49cf-b7be-e93194c45a44-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.247 185654 DEBUG oslo_concurrency.lockutils [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Lock "6e4e7f3d-60d3-49cf-b7be-e93194c45a44-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.248 185654 DEBUG oslo_concurrency.lockutils [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Lock "6e4e7f3d-60d3-49cf-b7be-e93194c45a44-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.249 185654 DEBUG nova.virt.libvirt.vif [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-27T23:06:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1823238454',display_name='tempest-ServersTestManualDisk-server-1823238454',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1823238454',id=10,image_ref='319632d9-1bdd-4de0-b1d2-0507a3e91b6b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEIAhoOXFRYlR3dQeYWnU3bPPp2QaLmbJOs0xIGPPjECXd1s4RIirz8l3ShSbGCktwMvo2kKXuJt0Qo9etz6G/ObRY7P/5Fej+Abfm4LRnuAbLTBhs9ANVuifQfpd47M3g==',key_name='tempest-keypair-675614874',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='96e79f52da2341129f0c6e2459dae69d',ramdisk_id='',reservation_id='r-085sq4on',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='319632d9-1bdd-4de0-b1d2-0507a3e91b6b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-19452803',owner_user_name='tempest-ServersTestManualDisk-19452803-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-27T23:06:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea2353d747c04d31940685f5b6330baa',uuid=6e4e7f3d-60d3-49cf-b7be-e93194c45a44,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2621603a-6426-42bf-8eb4-1b772c4b8ec7", "address": "fa:16:3e:fc:26:ef", "network": {"id": "858b1061-c06a-46f7-bd0e-407aa8dea432", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1392185424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e79f52da2341129f0c6e2459dae69d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2621603a-64", "ovs_interfaceid": "2621603a-6426-42bf-8eb4-1b772c4b8ec7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.249 185654 DEBUG nova.network.os_vif_util [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Converting VIF {"id": "2621603a-6426-42bf-8eb4-1b772c4b8ec7", "address": "fa:16:3e:fc:26:ef", "network": {"id": "858b1061-c06a-46f7-bd0e-407aa8dea432", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1392185424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e79f52da2341129f0c6e2459dae69d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2621603a-64", "ovs_interfaceid": "2621603a-6426-42bf-8eb4-1b772c4b8ec7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.250 185654 DEBUG nova.network.os_vif_util [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:26:ef,bridge_name='br-int',has_traffic_filtering=True,id=2621603a-6426-42bf-8eb4-1b772c4b8ec7,network=Network(858b1061-c06a-46f7-bd0e-407aa8dea432),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2621603a-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.250 185654 DEBUG os_vif [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:26:ef,bridge_name='br-int',has_traffic_filtering=True,id=2621603a-6426-42bf-8eb4-1b772c4b8ec7,network=Network(858b1061-c06a-46f7-bd0e-407aa8dea432),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2621603a-64') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.251 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.251 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.251 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.258 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.258 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2621603a-64, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.258 185654 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2621603a-64, col_values=(('external_ids', {'iface-id': '2621603a-6426-42bf-8eb4-1b772c4b8ec7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:26:ef', 'vm-uuid': '6e4e7f3d-60d3-49cf-b7be-e93194c45a44'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.261 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:00 compute-0 NetworkManager[56600]: <info>  [1769555220.2639] manager: (tap2621603a-64): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.264 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.275 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.277 185654 INFO os_vif [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:26:ef,bridge_name='br-int',has_traffic_filtering=True,id=2621603a-6426-42bf-8eb4-1b772c4b8ec7,network=Network(858b1061-c06a-46f7-bd0e-407aa8dea432),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2621603a-64')
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.361 185654 DEBUG nova.virt.libvirt.driver [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.361 185654 DEBUG nova.virt.libvirt.driver [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.362 185654 DEBUG nova.virt.libvirt.driver [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] No VIF found with MAC fa:16:3e:fc:26:ef, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.362 185654 INFO nova.virt.libvirt.driver [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Using config drive
Jan 27 23:07:00 compute-0 nova_compute[185650]: 2026-01-27 23:07:00.610 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:01 compute-0 nova_compute[185650]: 2026-01-27 23:07:01.749 185654 INFO nova.virt.libvirt.driver [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Creating config drive at /var/lib/nova/instances/6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk.config
Jan 27 23:07:01 compute-0 nova_compute[185650]: 2026-01-27 23:07:01.767 185654 DEBUG oslo_concurrency.processutils [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5zhjq69k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:07:01 compute-0 podman[201529]: time="2026-01-27T23:07:01Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 23:07:01 compute-0 podman[201529]: @ - - [27/Jan/2026:23:07:01 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29741 "" "Go-http-client/1.1"
Jan 27 23:07:01 compute-0 podman[201529]: @ - - [27/Jan/2026:23:07:01 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4851 "" "Go-http-client/1.1"
Jan 27 23:07:01 compute-0 openstack_network_exporter[204648]: ERROR   23:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 23:07:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:07:01 compute-0 openstack_network_exporter[204648]: ERROR   23:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 23:07:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:07:01 compute-0 nova_compute[185650]: 2026-01-27 23:07:01.921 185654 DEBUG oslo_concurrency.processutils [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5zhjq69k" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:07:02 compute-0 kernel: tap2621603a-64: entered promiscuous mode
Jan 27 23:07:02 compute-0 NetworkManager[56600]: <info>  [1769555222.0214] manager: (tap2621603a-64): new Tun device (/org/freedesktop/NetworkManager/Devices/52)
Jan 27 23:07:02 compute-0 nova_compute[185650]: 2026-01-27 23:07:02.022 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:02 compute-0 ovn_controller[98048]: 2026-01-27T23:07:02Z|00110|binding|INFO|Claiming lport 2621603a-6426-42bf-8eb4-1b772c4b8ec7 for this chassis.
Jan 27 23:07:02 compute-0 ovn_controller[98048]: 2026-01-27T23:07:02Z|00111|binding|INFO|2621603a-6426-42bf-8eb4-1b772c4b8ec7: Claiming fa:16:3e:fc:26:ef 10.100.0.7
Jan 27 23:07:02 compute-0 ovn_controller[98048]: 2026-01-27T23:07:02Z|00112|binding|INFO|Setting lport 2621603a-6426-42bf-8eb4-1b772c4b8ec7 ovn-installed in OVS
Jan 27 23:07:02 compute-0 nova_compute[185650]: 2026-01-27 23:07:02.044 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:02 compute-0 nova_compute[185650]: 2026-01-27 23:07:02.048 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:02 compute-0 systemd-udevd[249655]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 23:07:02 compute-0 systemd-machined[157036]: New machine qemu-10-instance-0000000a.
Jan 27 23:07:02 compute-0 NetworkManager[56600]: <info>  [1769555222.0896] device (tap2621603a-64): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 27 23:07:02 compute-0 NetworkManager[56600]: <info>  [1769555222.0911] device (tap2621603a-64): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 27 23:07:02 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Jan 27 23:07:02 compute-0 ovn_controller[98048]: 2026-01-27T23:07:02Z|00113|binding|INFO|Setting lport 2621603a-6426-42bf-8eb4-1b772c4b8ec7 up in Southbound
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:07:02.127 107302 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:26:ef 10.100.0.7'], port_security=['fa:16:3e:fc:26:ef 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6e4e7f3d-60d3-49cf-b7be-e93194c45a44', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-858b1061-c06a-46f7-bd0e-407aa8dea432', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96e79f52da2341129f0c6e2459dae69d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0c256955-33da-4220-8535-247c4baa8968', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c4e3e91-a53e-4dc5-940e-4f9e65e8c209, chassis=[<ovs.db.idl.Row object at 0x7f8d908cb640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8d908cb640>], logical_port=2621603a-6426-42bf-8eb4-1b772c4b8ec7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:07:02.128 107302 INFO neutron.agent.ovn.metadata.agent [-] Port 2621603a-6426-42bf-8eb4-1b772c4b8ec7 in datapath 858b1061-c06a-46f7-bd0e-407aa8dea432 bound to our chassis
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:07:02.130 107302 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 858b1061-c06a-46f7-bd0e-407aa8dea432
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:07:02.155 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[5c7e29fc-05bc-465b-be3f-541ca758fc22]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:07:02.158 107302 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap858b1061-c1 in ovnmeta-858b1061-c06a-46f7-bd0e-407aa8dea432 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:07:02.162 238735 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap858b1061-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:07:02.162 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[8d90cf65-268a-453e-b2fc-28913fa6869c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:07:02.164 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[d6aee2a1-b2e5-4153-8b88-89cf8fee8e0c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:07:02.186 107797 DEBUG oslo.privsep.daemon [-] privsep: reply[ccd887eb-decf-4e32-ab77-eb1897d16fad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:07:02.224 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[e432c198-a054-4537-be1a-6a45829f85d8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:07:02.273 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[4b76e7c8-c91b-4548-b61c-4d926e7b38b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:07:02 compute-0 NetworkManager[56600]: <info>  [1769555222.2879] manager: (tap858b1061-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/53)
Jan 27 23:07:02 compute-0 systemd-udevd[249658]: Network interface NamePolicy= disabled on kernel command line.
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:07:02.289 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[8d279e03-6869-4b8b-903a-e17971a8b650]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:07:02.337 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[d2d17dc7-3077-4bf3-a008-460e9cf43542]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:07:02.341 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[5ef2815a-287f-4a6a-ba9a-2577fc064220]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:07:02 compute-0 NetworkManager[56600]: <info>  [1769555222.3702] device (tap858b1061-c0): carrier: link connected
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:07:02.381 238756 DEBUG oslo.privsep.daemon [-] privsep: reply[e4273a59-a523-4e59-bd5f-10950aa68efc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:07:02.404 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[c07339fe-c4ed-474e-803a-740b508a1fa5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap858b1061-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ee:19:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505396, 'reachable_time': 34522, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249695, 'error': None, 'target': 'ovnmeta-858b1061-c06a-46f7-bd0e-407aa8dea432', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:07:02.427 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[af404e29-040c-4247-993a-3bb7345f632f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feee:194a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 505396, 'tstamp': 505396}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249696, 'error': None, 'target': 'ovnmeta-858b1061-c06a-46f7-bd0e-407aa8dea432', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:07:02.451 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[9fcb1dca-3f9d-46c0-ba24-c6ae42816038]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap858b1061-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ee:19:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505396, 'reachable_time': 34522, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249698, 'error': None, 'target': 'ovnmeta-858b1061-c06a-46f7-bd0e-407aa8dea432', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:07:02 compute-0 nova_compute[185650]: 2026-01-27 23:07:02.479 185654 DEBUG nova.virt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Emitting event <LifecycleEvent: 1769555222.4783623, 6e4e7f3d-60d3-49cf-b7be-e93194c45a44 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 23:07:02 compute-0 nova_compute[185650]: 2026-01-27 23:07:02.480 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] VM Started (Lifecycle Event)
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:07:02.496 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[13c512f7-bc4c-4770-9c9a-fbef96b427b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:07:02.583 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[026eaa80-2e59-4dc9-bba9-6e3641fc6f6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:07:02.587 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap858b1061-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:07:02.587 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:07:02.589 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap858b1061-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:07:02 compute-0 kernel: tap858b1061-c0: entered promiscuous mode
Jan 27 23:07:02 compute-0 NetworkManager[56600]: <info>  [1769555222.5927] manager: (tap858b1061-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Jan 27 23:07:02 compute-0 nova_compute[185650]: 2026-01-27 23:07:02.591 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:02 compute-0 nova_compute[185650]: 2026-01-27 23:07:02.596 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:07:02.597 107302 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap858b1061-c0, col_values=(('external_ids', {'iface-id': '214f43fe-43bb-4b3b-a8b3-b07b238c6785'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 27 23:07:02 compute-0 nova_compute[185650]: 2026-01-27 23:07:02.599 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:02 compute-0 ovn_controller[98048]: 2026-01-27T23:07:02Z|00114|binding|INFO|Releasing lport 214f43fe-43bb-4b3b-a8b3-b07b238c6785 from this chassis (sb_readonly=0)
Jan 27 23:07:02 compute-0 nova_compute[185650]: 2026-01-27 23:07:02.612 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:07:02.615 107302 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/858b1061-c06a-46f7-bd0e-407aa8dea432.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/858b1061-c06a-46f7-bd0e-407aa8dea432.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:07:02.616 238735 DEBUG oslo.privsep.daemon [-] privsep: reply[48fb5b84-bc3d-45f2-bc38-5d5458812656]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:07:02.617 107302 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: global
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]:     log         /dev/log local0 debug
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]:     log-tag     haproxy-metadata-proxy-858b1061-c06a-46f7-bd0e-407aa8dea432
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]:     user        root
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]:     group       root
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]:     maxconn     1024
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]:     pidfile     /var/lib/neutron/external/pids/858b1061-c06a-46f7-bd0e-407aa8dea432.pid.haproxy
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]:     daemon
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: 
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: defaults
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]:     log global
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]:     mode http
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]:     option httplog
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]:     option dontlognull
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]:     option http-server-close
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]:     option forwardfor
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]:     retries                 3
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]:     timeout http-request    30s
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]:     timeout connect         30s
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]:     timeout client          32s
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]:     timeout server          32s
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]:     timeout http-keep-alive 30s
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: 
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: 
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: listen listener
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]:     bind 169.254.169.254:80
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]:     server metadata /var/lib/neutron/metadata_proxy
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]:     http-request add-header X-OVN-Network-ID 858b1061-c06a-46f7-bd0e-407aa8dea432
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 27 23:07:02 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:07:02.618 107302 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-858b1061-c06a-46f7-bd0e-407aa8dea432', 'env', 'PROCESS_TAG=haproxy-858b1061-c06a-46f7-bd0e-407aa8dea432', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/858b1061-c06a-46f7-bd0e-407aa8dea432.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 27 23:07:03 compute-0 podman[249727]: 2026-01-27 23:07:03.1232999 +0000 UTC m=+0.097857213 container create 8981add292003a306296ef6b6c3c32ac466db296b9758fb5a9002030de5807d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858b1061-c06a-46f7-bd0e-407aa8dea432, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 23:07:03 compute-0 podman[249727]: 2026-01-27 23:07:03.066145504 +0000 UTC m=+0.040702837 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 27 23:07:03 compute-0 systemd[1]: Started libpod-conmon-8981add292003a306296ef6b6c3c32ac466db296b9758fb5a9002030de5807d5.scope.
Jan 27 23:07:03 compute-0 systemd[1]: Started libcrun container.
Jan 27 23:07:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d49eea32cb86ae126bd6223edb81737f2453d35a83f38bae48948a07035ed988/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 27 23:07:03 compute-0 nova_compute[185650]: 2026-01-27 23:07:03.230 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 23:07:03 compute-0 podman[249727]: 2026-01-27 23:07:03.235970327 +0000 UTC m=+0.210527670 container init 8981add292003a306296ef6b6c3c32ac466db296b9758fb5a9002030de5807d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858b1061-c06a-46f7-bd0e-407aa8dea432, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 27 23:07:03 compute-0 nova_compute[185650]: 2026-01-27 23:07:03.244 185654 DEBUG nova.virt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Emitting event <LifecycleEvent: 1769555222.4785018, 6e4e7f3d-60d3-49cf-b7be-e93194c45a44 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 23:07:03 compute-0 nova_compute[185650]: 2026-01-27 23:07:03.244 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] VM Paused (Lifecycle Event)
Jan 27 23:07:03 compute-0 podman[249727]: 2026-01-27 23:07:03.245931402 +0000 UTC m=+0.220488715 container start 8981add292003a306296ef6b6c3c32ac466db296b9758fb5a9002030de5807d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858b1061-c06a-46f7-bd0e-407aa8dea432, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 27 23:07:03 compute-0 podman[249741]: 2026-01-27 23:07:03.252836687 +0000 UTC m=+0.082910834 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 27 23:07:03 compute-0 neutron-haproxy-ovnmeta-858b1061-c06a-46f7-bd0e-407aa8dea432[249749]: [NOTICE]   (249771) : New worker (249773) forked
Jan 27 23:07:03 compute-0 neutron-haproxy-ovnmeta-858b1061-c06a-46f7-bd0e-407aa8dea432[249749]: [NOTICE]   (249771) : Loading success.
Jan 27 23:07:03 compute-0 nova_compute[185650]: 2026-01-27 23:07:03.342 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 23:07:03 compute-0 nova_compute[185650]: 2026-01-27 23:07:03.349 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 23:07:03 compute-0 nova_compute[185650]: 2026-01-27 23:07:03.369 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 23:07:03 compute-0 nova_compute[185650]: 2026-01-27 23:07:03.992 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:07:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:07:04.164 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:07:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:07:04.166 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:07:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:07:04.167 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:07:04 compute-0 nova_compute[185650]: 2026-01-27 23:07:04.465 185654 DEBUG nova.network.neutron [req-1c29b457-5b23-48ce-9bb5-067b7d467fd6 req-41c0dc9c-41c2-4020-a7e1-fa7143fd8177 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Updated VIF entry in instance network info cache for port 2621603a-6426-42bf-8eb4-1b772c4b8ec7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 23:07:04 compute-0 nova_compute[185650]: 2026-01-27 23:07:04.466 185654 DEBUG nova.network.neutron [req-1c29b457-5b23-48ce-9bb5-067b7d467fd6 req-41c0dc9c-41c2-4020-a7e1-fa7143fd8177 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Updating instance_info_cache with network_info: [{"id": "2621603a-6426-42bf-8eb4-1b772c4b8ec7", "address": "fa:16:3e:fc:26:ef", "network": {"id": "858b1061-c06a-46f7-bd0e-407aa8dea432", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1392185424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e79f52da2341129f0c6e2459dae69d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2621603a-64", "ovs_interfaceid": "2621603a-6426-42bf-8eb4-1b772c4b8ec7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 23:07:04 compute-0 nova_compute[185650]: 2026-01-27 23:07:04.814 185654 DEBUG oslo_concurrency.lockutils [req-1c29b457-5b23-48ce-9bb5-067b7d467fd6 req-41c0dc9c-41c2-4020-a7e1-fa7143fd8177 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Releasing lock "refresh_cache-6e4e7f3d-60d3-49cf-b7be-e93194c45a44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.015 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.015 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.016 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.016 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.030 185654 DEBUG nova.compute.manager [req-b0bea7a0-35d0-435a-bf18-3e0a74a5a975 req-cfbb11da-ce3e-449b-b610-60bf0d398d70 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Received event network-vif-plugged-2621603a-6426-42bf-8eb4-1b772c4b8ec7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.031 185654 DEBUG oslo_concurrency.lockutils [req-b0bea7a0-35d0-435a-bf18-3e0a74a5a975 req-cfbb11da-ce3e-449b-b610-60bf0d398d70 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "6e4e7f3d-60d3-49cf-b7be-e93194c45a44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.032 185654 DEBUG oslo_concurrency.lockutils [req-b0bea7a0-35d0-435a-bf18-3e0a74a5a975 req-cfbb11da-ce3e-449b-b610-60bf0d398d70 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "6e4e7f3d-60d3-49cf-b7be-e93194c45a44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.032 185654 DEBUG oslo_concurrency.lockutils [req-b0bea7a0-35d0-435a-bf18-3e0a74a5a975 req-cfbb11da-ce3e-449b-b610-60bf0d398d70 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "6e4e7f3d-60d3-49cf-b7be-e93194c45a44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.034 185654 DEBUG nova.compute.manager [req-b0bea7a0-35d0-435a-bf18-3e0a74a5a975 req-cfbb11da-ce3e-449b-b610-60bf0d398d70 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Processing event network-vif-plugged-2621603a-6426-42bf-8eb4-1b772c4b8ec7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.036 185654 DEBUG nova.compute.manager [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.042 185654 DEBUG nova.virt.driver [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] Emitting event <LifecycleEvent: 1769555225.0416656, 6e4e7f3d-60d3-49cf-b7be-e93194c45a44 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.042 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] VM Resumed (Lifecycle Event)
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.045 185654 DEBUG nova.virt.libvirt.driver [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.052 185654 INFO nova.virt.libvirt.driver [-] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Instance spawned successfully.
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.053 185654 DEBUG nova.virt.libvirt.driver [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.061 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.067 185654 DEBUG nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.090 185654 DEBUG nova.virt.libvirt.driver [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.091 185654 DEBUG nova.virt.libvirt.driver [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.092 185654 DEBUG nova.virt.libvirt.driver [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.092 185654 DEBUG nova.virt.libvirt.driver [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.093 185654 DEBUG nova.virt.libvirt.driver [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.093 185654 DEBUG nova.virt.libvirt.driver [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.106 185654 INFO nova.compute.manager [None req-4c3ab5e9-fd5f-4a23-854c-be623c6cc322 - - - - - -] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.182 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66eb7f87-9511-4da7-8733-ef0673cfab67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.247 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66eb7f87-9511-4da7-8733-ef0673cfab67/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.248 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66eb7f87-9511-4da7-8733-ef0673cfab67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.269 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.340 185654 INFO nova.compute.manager [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Took 14.01 seconds to spawn the instance on the hypervisor.
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.341 185654 DEBUG nova.compute.manager [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.346 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66eb7f87-9511-4da7-8733-ef0673cfab67/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.369 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.440 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.442 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.518 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.529 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.595 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.597 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.621 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:05 compute-0 nova_compute[185650]: 2026-01-27 23:07:05.672 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:07:06 compute-0 nova_compute[185650]: 2026-01-27 23:07:06.142 185654 INFO nova.compute.manager [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Took 16.67 seconds to build instance.
Jan 27 23:07:06 compute-0 nova_compute[185650]: 2026-01-27 23:07:06.205 185654 WARNING nova.virt.libvirt.driver [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 23:07:06 compute-0 nova_compute[185650]: 2026-01-27 23:07:06.207 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4964MB free_disk=72.3165283203125GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 23:07:06 compute-0 nova_compute[185650]: 2026-01-27 23:07:06.208 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:07:06 compute-0 nova_compute[185650]: 2026-01-27 23:07:06.209 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:07:06 compute-0 nova_compute[185650]: 2026-01-27 23:07:06.424 185654 DEBUG oslo_concurrency.lockutils [None req-ecca1760-743a-4eb2-a72c-b1133b51309f ea2353d747c04d31940685f5b6330baa 96e79f52da2341129f0c6e2459dae69d - - default default] Lock "6e4e7f3d-60d3-49cf-b7be-e93194c45a44" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.357s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:07:07 compute-0 podman[249801]: 2026-01-27 23:07:07.389114057 +0000 UTC m=+0.087989779 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Jan 27 23:07:07 compute-0 podman[249821]: 2026-01-27 23:07:07.53687392 +0000 UTC m=+0.109235606 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.component=ubi9-container, vcs-type=git, maintainer=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, name=ubi9, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 27 23:07:07 compute-0 podman[249841]: 2026-01-27 23:07:07.652803314 +0000 UTC m=+0.115585035 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Jan 27 23:07:10 compute-0 nova_compute[185650]: 2026-01-27 23:07:10.278 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:10 compute-0 nova_compute[185650]: 2026-01-27 23:07:10.615 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:11 compute-0 nova_compute[185650]: 2026-01-27 23:07:11.206 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 66eb7f87-9511-4da7-8733-ef0673cfab67 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 23:07:11 compute-0 nova_compute[185650]: 2026-01-27 23:07:11.207 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 9033d5a6-ab60-43e3-bbcb-3a8b83161c58 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 23:07:11 compute-0 nova_compute[185650]: 2026-01-27 23:07:11.207 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 6e4e7f3d-60d3-49cf-b7be-e93194c45a44 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 23:07:11 compute-0 nova_compute[185650]: 2026-01-27 23:07:11.207 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 23:07:11 compute-0 nova_compute[185650]: 2026-01-27 23:07:11.207 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 23:07:11 compute-0 sshd-session[249866]: Connection closed by authenticating user root 45.148.10.121 port 36332 [preauth]
Jan 27 23:07:11 compute-0 nova_compute[185650]: 2026-01-27 23:07:11.969 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 23:07:12 compute-0 nova_compute[185650]: 2026-01-27 23:07:12.724 185654 DEBUG nova.compute.manager [req-4dce162e-3968-4ae2-a3b0-976c7b4a640d req-7cede108-dadd-4134-acb8-8c92da75748d b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Received event network-vif-plugged-2621603a-6426-42bf-8eb4-1b772c4b8ec7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 23:07:12 compute-0 nova_compute[185650]: 2026-01-27 23:07:12.724 185654 DEBUG oslo_concurrency.lockutils [req-4dce162e-3968-4ae2-a3b0-976c7b4a640d req-7cede108-dadd-4134-acb8-8c92da75748d b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "6e4e7f3d-60d3-49cf-b7be-e93194c45a44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:07:12 compute-0 nova_compute[185650]: 2026-01-27 23:07:12.725 185654 DEBUG oslo_concurrency.lockutils [req-4dce162e-3968-4ae2-a3b0-976c7b4a640d req-7cede108-dadd-4134-acb8-8c92da75748d b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "6e4e7f3d-60d3-49cf-b7be-e93194c45a44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:07:12 compute-0 nova_compute[185650]: 2026-01-27 23:07:12.725 185654 DEBUG oslo_concurrency.lockutils [req-4dce162e-3968-4ae2-a3b0-976c7b4a640d req-7cede108-dadd-4134-acb8-8c92da75748d b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Lock "6e4e7f3d-60d3-49cf-b7be-e93194c45a44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:07:12 compute-0 nova_compute[185650]: 2026-01-27 23:07:12.725 185654 DEBUG nova.compute.manager [req-4dce162e-3968-4ae2-a3b0-976c7b4a640d req-7cede108-dadd-4134-acb8-8c92da75748d b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] No waiting events found dispatching network-vif-plugged-2621603a-6426-42bf-8eb4-1b772c4b8ec7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 27 23:07:12 compute-0 nova_compute[185650]: 2026-01-27 23:07:12.725 185654 WARNING nova.compute.manager [req-4dce162e-3968-4ae2-a3b0-976c7b4a640d req-7cede108-dadd-4134-acb8-8c92da75748d b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 6e4e7f3d-60d3-49cf-b7be-e93194c45a44] Received unexpected event network-vif-plugged-2621603a-6426-42bf-8eb4-1b772c4b8ec7 for instance with vm_state active and task_state None.
Jan 27 23:07:12 compute-0 nova_compute[185650]: 2026-01-27 23:07:12.736 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 23:07:13 compute-0 nova_compute[185650]: 2026-01-27 23:07:13.040 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 23:07:13 compute-0 nova_compute[185650]: 2026-01-27 23:07:13.041 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 6.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:07:14 compute-0 nova_compute[185650]: 2026-01-27 23:07:14.030 185654 DEBUG oslo_concurrency.lockutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Acquiring lock "dd37badf-e0f2-4ba3-b12e-f4238236f28d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:07:14 compute-0 nova_compute[185650]: 2026-01-27 23:07:14.031 185654 DEBUG oslo_concurrency.lockutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Lock "dd37badf-e0f2-4ba3-b12e-f4238236f28d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:07:14 compute-0 nova_compute[185650]: 2026-01-27 23:07:14.067 185654 DEBUG nova.compute.manager [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 27 23:07:14 compute-0 nova_compute[185650]: 2026-01-27 23:07:14.228 185654 DEBUG oslo_concurrency.lockutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:07:14 compute-0 nova_compute[185650]: 2026-01-27 23:07:14.229 185654 DEBUG oslo_concurrency.lockutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:07:14 compute-0 nova_compute[185650]: 2026-01-27 23:07:14.427 185654 DEBUG nova.objects.instance [None req-d961fe3b-8891-44b9-b149-ad73d5181319 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Lazy-loading 'flavor' on Instance uuid 9033d5a6-ab60-43e3-bbcb-3a8b83161c58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 23:07:14 compute-0 nova_compute[185650]: 2026-01-27 23:07:14.475 185654 DEBUG oslo_concurrency.lockutils [None req-d961fe3b-8891-44b9-b149-ad73d5181319 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Acquiring lock "refresh_cache-9033d5a6-ab60-43e3-bbcb-3a8b83161c58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 23:07:14 compute-0 nova_compute[185650]: 2026-01-27 23:07:14.475 185654 DEBUG oslo_concurrency.lockutils [None req-d961fe3b-8891-44b9-b149-ad73d5181319 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Acquired lock "refresh_cache-9033d5a6-ab60-43e3-bbcb-3a8b83161c58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 23:07:14 compute-0 nova_compute[185650]: 2026-01-27 23:07:14.639 185654 DEBUG nova.virt.hardware [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 27 23:07:14 compute-0 nova_compute[185650]: 2026-01-27 23:07:14.639 185654 INFO nova.compute.claims [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] Claim successful on node compute-0.ctlplane.example.com
Jan 27 23:07:14 compute-0 nova_compute[185650]: 2026-01-27 23:07:14.898 185654 DEBUG nova.compute.provider_tree [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 23:07:14 compute-0 nova_compute[185650]: 2026-01-27 23:07:14.959 185654 DEBUG nova.scheduler.client.report [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 23:07:14 compute-0 nova_compute[185650]: 2026-01-27 23:07:14.987 185654 DEBUG oslo_concurrency.lockutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:07:14 compute-0 nova_compute[185650]: 2026-01-27 23:07:14.987 185654 DEBUG nova.compute.manager [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 27 23:07:15 compute-0 nova_compute[185650]: 2026-01-27 23:07:15.121 185654 DEBUG nova.compute.manager [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 27 23:07:15 compute-0 nova_compute[185650]: 2026-01-27 23:07:15.121 185654 DEBUG nova.network.neutron [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 27 23:07:15 compute-0 nova_compute[185650]: 2026-01-27 23:07:15.144 185654 INFO nova.virt.libvirt.driver [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 27 23:07:15 compute-0 nova_compute[185650]: 2026-01-27 23:07:15.166 185654 DEBUG nova.compute.manager [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 27 23:07:15 compute-0 nova_compute[185650]: 2026-01-27 23:07:15.292 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:15 compute-0 nova_compute[185650]: 2026-01-27 23:07:15.354 185654 DEBUG nova.compute.manager [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 27 23:07:15 compute-0 nova_compute[185650]: 2026-01-27 23:07:15.355 185654 DEBUG nova.virt.libvirt.driver [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 27 23:07:15 compute-0 nova_compute[185650]: 2026-01-27 23:07:15.356 185654 INFO nova.virt.libvirt.driver [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] Creating image(s)
Jan 27 23:07:15 compute-0 nova_compute[185650]: 2026-01-27 23:07:15.356 185654 DEBUG oslo_concurrency.lockutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Acquiring lock "/var/lib/nova/instances/dd37badf-e0f2-4ba3-b12e-f4238236f28d/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:07:15 compute-0 nova_compute[185650]: 2026-01-27 23:07:15.357 185654 DEBUG oslo_concurrency.lockutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Lock "/var/lib/nova/instances/dd37badf-e0f2-4ba3-b12e-f4238236f28d/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:07:15 compute-0 nova_compute[185650]: 2026-01-27 23:07:15.357 185654 DEBUG oslo_concurrency.lockutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Lock "/var/lib/nova/instances/dd37badf-e0f2-4ba3-b12e-f4238236f28d/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:07:15 compute-0 nova_compute[185650]: 2026-01-27 23:07:15.372 185654 DEBUG oslo_concurrency.processutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:07:15 compute-0 nova_compute[185650]: 2026-01-27 23:07:15.434 185654 DEBUG oslo_concurrency.processutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:07:15 compute-0 nova_compute[185650]: 2026-01-27 23:07:15.435 185654 DEBUG oslo_concurrency.lockutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Acquiring lock "1e4e814900a1ccc0cddf32336f7d631bc193ea2c" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:07:15 compute-0 nova_compute[185650]: 2026-01-27 23:07:15.436 185654 DEBUG oslo_concurrency.lockutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Lock "1e4e814900a1ccc0cddf32336f7d631bc193ea2c" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:07:15 compute-0 nova_compute[185650]: 2026-01-27 23:07:15.449 185654 DEBUG oslo_concurrency.processutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:07:15 compute-0 nova_compute[185650]: 2026-01-27 23:07:15.506 185654 DEBUG oslo_concurrency.processutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:07:15 compute-0 nova_compute[185650]: 2026-01-27 23:07:15.507 185654 DEBUG oslo_concurrency.processutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c,backing_fmt=raw /var/lib/nova/instances/dd37badf-e0f2-4ba3-b12e-f4238236f28d/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:07:15 compute-0 nova_compute[185650]: 2026-01-27 23:07:15.543 185654 DEBUG oslo_concurrency.processutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c,backing_fmt=raw /var/lib/nova/instances/dd37badf-e0f2-4ba3-b12e-f4238236f28d/disk 1073741824" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:07:15 compute-0 nova_compute[185650]: 2026-01-27 23:07:15.543 185654 DEBUG oslo_concurrency.lockutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Lock "1e4e814900a1ccc0cddf32336f7d631bc193ea2c" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.108s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:07:15 compute-0 nova_compute[185650]: 2026-01-27 23:07:15.544 185654 DEBUG oslo_concurrency.processutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:07:15 compute-0 nova_compute[185650]: 2026-01-27 23:07:15.599 185654 DEBUG oslo_concurrency.processutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e4e814900a1ccc0cddf32336f7d631bc193ea2c --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:07:15 compute-0 nova_compute[185650]: 2026-01-27 23:07:15.600 185654 DEBUG nova.virt.disk.api [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Checking if we can resize image /var/lib/nova/instances/dd37badf-e0f2-4ba3-b12e-f4238236f28d/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 27 23:07:15 compute-0 nova_compute[185650]: 2026-01-27 23:07:15.601 185654 DEBUG oslo_concurrency.processutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd37badf-e0f2-4ba3-b12e-f4238236f28d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:07:15 compute-0 nova_compute[185650]: 2026-01-27 23:07:15.622 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:15 compute-0 nova_compute[185650]: 2026-01-27 23:07:15.678 185654 DEBUG oslo_concurrency.processutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd37badf-e0f2-4ba3-b12e-f4238236f28d/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:07:15 compute-0 nova_compute[185650]: 2026-01-27 23:07:15.679 185654 DEBUG nova.virt.disk.api [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Cannot resize image /var/lib/nova/instances/dd37badf-e0f2-4ba3-b12e-f4238236f28d/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 27 23:07:15 compute-0 nova_compute[185650]: 2026-01-27 23:07:15.680 185654 DEBUG nova.objects.instance [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Lazy-loading 'migration_context' on Instance uuid dd37badf-e0f2-4ba3-b12e-f4238236f28d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 23:07:15 compute-0 nova_compute[185650]: 2026-01-27 23:07:15.720 185654 DEBUG nova.virt.libvirt.driver [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 27 23:07:15 compute-0 nova_compute[185650]: 2026-01-27 23:07:15.720 185654 DEBUG nova.virt.libvirt.driver [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] Ensure instance console log exists: /var/lib/nova/instances/dd37badf-e0f2-4ba3-b12e-f4238236f28d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 27 23:07:15 compute-0 nova_compute[185650]: 2026-01-27 23:07:15.721 185654 DEBUG oslo_concurrency.lockutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:07:15 compute-0 nova_compute[185650]: 2026-01-27 23:07:15.722 185654 DEBUG oslo_concurrency.lockutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:07:15 compute-0 nova_compute[185650]: 2026-01-27 23:07:15.722 185654 DEBUG oslo_concurrency.lockutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:07:16 compute-0 podman[249883]: 2026-01-27 23:07:16.399944483 +0000 UTC m=+0.087818265 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 27 23:07:16 compute-0 nova_compute[185650]: 2026-01-27 23:07:16.874 185654 DEBUG nova.policy [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ff1ffdaed4ce4dfcadad3d10f8683070', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '51f3209263824d18abe4e752dc4c06d5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 27 23:07:18 compute-0 nova_compute[185650]: 2026-01-27 23:07:18.044 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:07:18 compute-0 nova_compute[185650]: 2026-01-27 23:07:18.085 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:07:18 compute-0 nova_compute[185650]: 2026-01-27 23:07:18.085 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.204 185654 ERROR oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Error during ComputeManager._heal_instance_info_cache: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query')
Jan 27 23:07:19 compute-0 nova_compute[185650]: [SQL: SELECT instance_system_metadata.created_at AS instance_system_metadata_created_at, instance_system_metadata.updated_at AS instance_system_metadata_updated_at, instance_system_metadata.deleted_at AS instance_system_metadata_deleted_at, instance_system_metadata.deleted AS instance_system_metadata_deleted, instance_system_metadata.id AS instance_system_metadata_id, instance_system_metadata.`key` AS instance_system_metadata_key, instance_system_metadata.value AS instance_system_metadata_value, instance_system_metadata.instance_uuid AS instance_system_metadata_instance_uuid, anon_1.instances_uuid AS anon_1_instances_uuid 
Jan 27 23:07:19 compute-0 nova_compute[185650]: FROM (SELECT DISTINCT instances.uuid AS instances_uuid, instances.id AS instances_id 
Jan 27 23:07:19 compute-0 nova_compute[185650]: FROM instances 
Jan 27 23:07:19 compute-0 nova_compute[185650]: WHERE instances.deleted = %(deleted_1)s AND instances.uuid = %(uuid_1)s ORDER BY instances.id 
Jan 27 23:07:19 compute-0 nova_compute[185650]:  LIMIT %(param_1)s) AS anon_1 INNER JOIN instance_system_metadata ON anon_1.instances_uuid = instance_system_metadata.instance_uuid]
Jan 27 23:07:19 compute-0 nova_compute[185650]: [parameters: {'deleted_1': 0, 'uuid_1': '9033d5a6-ab60-43e3-bbcb-3a8b83161c58', 'param_1': 1}]
Jan 27 23:07:19 compute-0 nova_compute[185650]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 23:07:19 compute-0 nova_compute[185650]: ['Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context\n    self.dialect.do_execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute\n    cursor.execute(statement, parameters)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 163, in execute\n    result = self._query(query)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 321, in _query\n    conn.query(q)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 505, in query\n    self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 724, in _read_query_result\n    result.read()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 1069, in read\n    first_packet = self.connection._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', "pymysql.err.OperationalError: (2013, 'Lost connection to MySQL server during query')\n", '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 525, in get_by_uuid\n    db_inst = cls._db_instance_get_by_uuid(context, uuid, columns_to_join,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 517, in _db_instance_get_by_uuid\n    return db.instance_get_by_uuid(context, uuid,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/utils.py", line 35, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1395, in instance_get_by_uuid\n    return _instance_get_by_uuid(context, uuid,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1400, in _instance_get_by_uuid\n    result = _build_instance_get(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/result.py", line 1498, in first\n    return self._only_one_row(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/result.py", line 559, in _only_one_row\n    row = onerow(hard_close=True)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/result.py", line 1386, in _fetchone_impl\n    return self._real_result._fetchone_impl(hard_close=hard_close)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/result.py", line 1801, in _fetchone_impl\n    row = next(self.iterator, _NO_ROW)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/loading.py", line 151, in chunks\n    rows = [proc(row) for row in fetch]\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/loading.py", line 151, in <listcomp>\n    rows = [proc(row) for row in fetch]\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/loading.py", line 962, in _instance\n    _populate_full(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/loading.py", line 1136, in _populate_full\n    populator(state, dict_, row)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/strategies.py", line 1870, in load_collection_from_subq\n    collection = collections.get(tuple_getter(row), ())\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/strategies.py", line 1600, in get\n    self._load()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/strategies.py", line 1616, in _load\n    rows = list(q.params(self.params))\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2901, in __iter__\n    result = self._iter()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1714, in execute\n    result = conn._execute_20(statement, params or {}, execution_options)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_20\n    return meth(self, args_10style, kwargs_10style, execution_options)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection\n    return connection._execute_clauseelement(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1572, in _execute_clauseelement\n    ret = self._execute_context(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1943, in _execute_context\n    self._handle_dbapi_exception(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2122, in _handle_dbapi_exception\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context\n    self.dialect.do_execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute\n    cursor.execute(statement, parameters)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 163, in execute\n    result = self._query(query)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 321, in _query\n    conn.query(q)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 505, in query\n    self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 724, in _read_query_result\n    result.read()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 1069, in read\n    first_packet = self.connection._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', "oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query')\n[SQL: SELECT instance_system_metadata.created_at AS instance_system_metadata_created_at, instance_system_metadata.updated_at AS instance_system_metadata_updated_at, instance_system_metadata.deleted_at AS instance_system_metadata_deleted_at, instance_system_metadata.deleted AS instance_system_metadata_deleted, instance_system_metadata.id AS instance_system_metadata_id, instance_system_metadata.`key` AS instance_system_metadata_key, instance_system_metadata.value AS instance_system_metadata_value, instance_system_metadata.instance_uuid AS instance_system_metadata_instance_uuid, anon_1.instances_uuid AS anon_1_instances_uuid \nFROM (SELECT DISTINCT instances.uuid AS instances_uuid, instances.id AS instances_id \nFROM instances \nWHERE instances.deleted = %(deleted_1)s AND instances.uuid = %(uuid_1)s ORDER BY instances.id \n LIMIT %(param_1)s) AS anon_1 INNER JOIN instance_system_metadata ON anon_1.instances_uuid = instance_system_metadata.instance_uuid]\n[parameters: {'deleted_1': 0, 'uuid_1': '9033d5a6-ab60-43e3-bbcb-3a8b83161c58', 'param_1': 1}]\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n"].
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.204 185654 ERROR oslo_service.periodic_task Traceback (most recent call last):
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.204 185654 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.204 185654 ERROR oslo_service.periodic_task     task(self, context)
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.204 185654 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 9891, in _heal_instance_info_cache
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.204 185654 ERROR oslo_service.periodic_task     inst = objects.Instance.get_by_uuid(
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.204 185654 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.204 185654 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.204 185654 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.204 185654 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.204 185654 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.204 185654 ERROR oslo_service.periodic_task     result = self.transport._send(
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.204 185654 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.204 185654 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.204 185654 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.204 185654 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.204 185654 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.204 185654 ERROR oslo_service.periodic_task     raise result
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.204 185654 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query')
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.204 185654 ERROR oslo_service.periodic_task [SQL: SELECT instance_system_metadata.created_at AS instance_system_metadata_created_at, instance_system_metadata.updated_at AS instance_system_metadata_updated_at, instance_system_metadata.deleted_at AS instance_system_metadata_deleted_at, instance_system_metadata.deleted AS instance_system_metadata_deleted, instance_system_metadata.id AS instance_system_metadata_id, instance_system_metadata.`key` AS instance_system_metadata_key, instance_system_metadata.value AS instance_system_metadata_value, instance_system_metadata.instance_uuid AS instance_system_metadata_instance_uuid, anon_1.instances_uuid AS anon_1_instances_uuid 
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.204 185654 ERROR oslo_service.periodic_task FROM (SELECT DISTINCT instances.uuid AS instances_uuid, instances.id AS instances_id 
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.204 185654 ERROR oslo_service.periodic_task FROM instances 
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.204 185654 ERROR oslo_service.periodic_task WHERE instances.deleted = %(deleted_1)s AND instances.uuid = %(uuid_1)s ORDER BY instances.id 
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.204 185654 ERROR oslo_service.periodic_task  LIMIT %(param_1)s) AS anon_1 INNER JOIN instance_system_metadata ON anon_1.instances_uuid = instance_system_metadata.instance_uuid]
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.204 185654 ERROR oslo_service.periodic_task [parameters: {'deleted_1': 0, 'uuid_1': '9033d5a6-ab60-43e3-bbcb-3a8b83161c58', 'param_1': 1}]
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.204 185654 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.204 185654 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context\n    self.dialect.do_execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute\n    cursor.execute(statement, parameters)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 163, in execute\n    result = self._query(query)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 321, in _query\n    conn.query(q)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 505, in query\n    self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 724, in _read_query_result\n    result.read()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 1069, in read\n    first_packet = self.connection._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', "pymysql.err.OperationalError: (2013, 'Lost connection to MySQL server during query')\n", '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 525, in get_by_uuid\n    db_inst = 
cls._db_instance_get_by_uuid(context, uuid, columns_to_join,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 517, in _db_instance_get_by_uuid\n    return db.instance_get_by_uuid(context, uuid,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/utils.py", line 35, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1395, in instance_get_by_uuid\n    return _instance_get_by_uuid(context, uuid,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1400, in _instance_get_by_uuid\n    result = _build_instance_get(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/result.py", line 1498, in first\n    return self._only_one_row(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/result.py", line 559, in _only_one_row\n    row = onerow(hard_close=True)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/result.py", line 1386, in _fetchone_impl\n    return self._real_result._fetchone_impl(hard_close=hard_close)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/result.py", line 1801, in _fetchone_impl\n    row = next(self.iterator, _NO_ROW)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/loading.py", line 151, in chunks\n    rows = [proc(row) for row in fetch]\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/loading.py", line 151, in <listcomp>\n    rows = [proc(row) for row in fetch]\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/loading.py", line 962, in _instance\n    
_populate_full(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/loading.py", line 1136, in _populate_full\n    populator(state, dict_, row)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/strategies.py", line 1870, in load_collection_from_subq\n    collection = collections.get(tuple_getter(row), ())\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/strategies.py", line 1600, in get\n    self._load()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/strategies.py", line 1616, in _load\n    rows = list(q.params(self.params))\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2901, in __iter__\n    result = self._iter()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1714, in execute\n    result = conn._execute_20(statement, params or {}, execution_options)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_20\n    return meth(self, args_10style, kwargs_10style, execution_options)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection\n    return connection._execute_clauseelement(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1572, in _execute_clauseelement\n    ret = self._execute_context(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1943, in _execute_context\n    self._handle_dbapi_exception(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2122, in _handle_dbapi_exception\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context\n    self.dialect.do_execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute\n    cursor.execute(statement, parameters)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 163, in execute\n    result = self._query(query)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 321, in _query\n    conn.query(q)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 505, in query\n    self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 724, in _read_query_result\n    result.read()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 1069, in read\n    first_packet = self.connection._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', "oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query')\n[SQL: SELECT instance_system_metadata.created_at AS instance_system_metadata_created_at, instance_system_metadata.updated_at AS instance_system_metadata_updated_at, instance_system_metadata.deleted_at AS instance_system_metadata_deleted_at, instance_system_metadata.deleted AS instance_system_metadata_deleted, instance_system_metadata.id AS instance_system_metadata_id, instance_system_metadata.`key` AS instance_system_metadata_key, instance_system_metadata.value AS instance_system_metadata_value, instance_system_metadata.instance_uuid AS instance_system_metadata_instance_uuid, anon_1.instances_uuid AS anon_1_instances_uuid \nFROM (SELECT DISTINCT instances.uuid AS 
instances_uuid, instances.id AS instances_id \nFROM instances \nWHERE instances.deleted = %(deleted_1)s AND instances.uuid = %(uuid_1)s ORDER BY instances.id \n LIMIT %(param_1)s) AS anon_1 INNER JOIN instance_system_metadata ON anon_1.instances_uuid = instance_system_metadata.instance_uuid]\n[parameters: {'deleted_1': 0, 'uuid_1': '9033d5a6-ab60-43e3-bbcb-3a8b83161c58', 'param_1': 1}]\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n"].
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.204 185654 ERROR oslo_service.periodic_task 
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.213 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.214 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.214 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.215 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.216 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.216 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.217 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 23:07:19 compute-0 podman[249905]: 2026-01-27 23:07:19.424917646 +0000 UTC m=+0.101589763 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, config_id=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.openshift.expose-services=, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, architecture=x86_64, name=ubi9-minimal, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Jan 27 23:07:19 compute-0 rsyslogd[235951]: message too long (8245) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-pack [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 23:07:19 compute-0 rsyslogd[235951]: message too long (8309) with configured size 8096, begin of message is: 2026-01-27 23:07:19.204 185654 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.526 185654 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 23:07:19 compute-0 nova_compute[185650]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 23:07:19 compute-0 nova_compute[185650]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = 
e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in 
raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise 
exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.526 185654 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.526 185654 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.526 185654 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.526 185654 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.526 185654 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.526 185654 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.526 185654 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.526 185654 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.526 185654 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.526 185654 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.526 185654 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.526 185654 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.526 185654 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.526 185654 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.526 185654 ERROR nova.servicegroup.drivers.db     raise result
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.526 185654 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.526 185654 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.526 185654 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in 
get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File 
"/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else 
engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    
compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 27 23:07:19 compute-0 nova_compute[185650]: 2026-01-27 23:07:19.526 185654 ERROR nova.servicegroup.drivers.db 
Jan 27 23:07:20 compute-0 rsyslogd[235951]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 23:07:20 compute-0 rsyslogd[235951]: message too long (9052) with configured size 8096, begin of message is: 2026-01-27 23:07:19.526 185654 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.166 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.231 185654 DEBUG neutronclient.v2_0.client [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Error message: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Instance failed network setup after 1 attempt(s): neutronclient.common.exceptions.ServiceUnavailable: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 23:07:20 compute-0 nova_compute[185650]: The Keystone service is temporarily unavailable.
Jan 27 23:07:20 compute-0 nova_compute[185650]: 
Jan 27 23:07:20 compute-0 nova_compute[185650]: 
Jan 27 23:07:20 compute-0 nova_compute[185650]: Neutron server returns request_ids: ['req-7349e9a7-ce70-4c34-beea-fef4691f1e14']
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager Traceback (most recent call last):
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 1960, in _allocate_network_async
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager     nwinfo = self.network_api.allocate_for_instance(
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 1193, in allocate_for_instance
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager     security_group_ids = self._process_security_groups(
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 851, in _process_security_groups
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager     user_security_groups = neutron.list_security_groups(
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager     ret = obj(*args, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 1063, in list_security_groups
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager     return self.list('security_groups', self.security_groups_path,
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager     ret = obj(*args, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 372, in list
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager     for r in self._pagination(collection, path, **params):
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 387, in _pagination
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager     res = self.get(path, params=params)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager     ret = obj(*args, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 356, in get
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager     return self.retry_request("GET", action, body=body,
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager     ret = obj(*args, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 333, in retry_request
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager     return self.do_request(method, action, body=body,
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager     ret = obj(*args, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 297, in do_request
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager     self._handle_fault_response(status_code, replybody, resp)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager     ret = obj(*args, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 272, in _handle_fault_response
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager     exception_handler_v20(status_code, error_body)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 90, in exception_handler_v20
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager     raise client_exc(message=error_message,
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager neutronclient.common.exceptions.ServiceUnavailable: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager The Keystone service is temporarily unavailable.
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager 
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager 
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager Neutron server returns request_ids: ['req-7349e9a7-ce70-4c34-beea-fef4691f1e14']
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.233 185654 ERROR nova.compute.manager 
Jan 27 23:07:20 compute-0 nova_compute[185650]: Traceback (most recent call last):
Jan 27 23:07:20 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/eventlet/hubs/poll.py", line 111, in wait
Jan 27 23:07:20 compute-0 nova_compute[185650]:     listener.cb(fileno)
Jan 27 23:07:20 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/eventlet/greenthread.py", line 221, in main
Jan 27 23:07:20 compute-0 nova_compute[185650]:     result = function(*args, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/utils.py", line 654, in context_wrapper
Jan 27 23:07:20 compute-0 nova_compute[185650]:     return func(*args, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 1982, in _allocate_network_async
Jan 27 23:07:20 compute-0 nova_compute[185650]:     raise e
Jan 27 23:07:20 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 1960, in _allocate_network_async
Jan 27 23:07:20 compute-0 nova_compute[185650]:     nwinfo = self.network_api.allocate_for_instance(
Jan 27 23:07:20 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 1193, in allocate_for_instance
Jan 27 23:07:20 compute-0 nova_compute[185650]:     security_group_ids = self._process_security_groups(
Jan 27 23:07:20 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 851, in _process_security_groups
Jan 27 23:07:20 compute-0 nova_compute[185650]:     user_security_groups = neutron.list_security_groups(
Jan 27 23:07:20 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 27 23:07:20 compute-0 nova_compute[185650]:     ret = obj(*args, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 1063, in list_security_groups
Jan 27 23:07:20 compute-0 nova_compute[185650]:     return self.list('security_groups', self.security_groups_path,
Jan 27 23:07:20 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 27 23:07:20 compute-0 nova_compute[185650]:     ret = obj(*args, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 372, in list
Jan 27 23:07:20 compute-0 nova_compute[185650]:     for r in self._pagination(collection, path, **params):
Jan 27 23:07:20 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 387, in _pagination
Jan 27 23:07:20 compute-0 nova_compute[185650]:     res = self.get(path, params=params)
Jan 27 23:07:20 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 27 23:07:20 compute-0 nova_compute[185650]:     ret = obj(*args, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 356, in get
Jan 27 23:07:20 compute-0 nova_compute[185650]:     return self.retry_request("GET", action, body=body,
Jan 27 23:07:20 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 27 23:07:20 compute-0 nova_compute[185650]:     ret = obj(*args, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 333, in retry_request
Jan 27 23:07:20 compute-0 nova_compute[185650]:     return self.do_request(method, action, body=body,
Jan 27 23:07:20 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 27 23:07:20 compute-0 nova_compute[185650]:     ret = obj(*args, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 297, in do_request
Jan 27 23:07:20 compute-0 nova_compute[185650]:     self._handle_fault_response(status_code, replybody, resp)
Jan 27 23:07:20 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 27 23:07:20 compute-0 nova_compute[185650]:     ret = obj(*args, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 272, in _handle_fault_response
Jan 27 23:07:20 compute-0 nova_compute[185650]:     exception_handler_v20(status_code, error_body)
Jan 27 23:07:20 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 90, in exception_handler_v20
Jan 27 23:07:20 compute-0 nova_compute[185650]:     raise client_exc(message=error_message,
Jan 27 23:07:20 compute-0 nova_compute[185650]: neutronclient.common.exceptions.ServiceUnavailable: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 23:07:20 compute-0 nova_compute[185650]: The Keystone service is temporarily unavailable.
Jan 27 23:07:20 compute-0 nova_compute[185650]: 
Jan 27 23:07:20 compute-0 nova_compute[185650]: 
Jan 27 23:07:20 compute-0 nova_compute[185650]: Neutron server returns request_ids: ['req-7349e9a7-ce70-4c34-beea-fef4691f1e14']
Jan 27 23:07:20 compute-0 nova_compute[185650]: Removing descriptor: 31
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] Instance failed to spawn: neutronclient.common.exceptions.ServiceUnavailable: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 23:07:20 compute-0 nova_compute[185650]: The Keystone service is temporarily unavailable.
Jan 27 23:07:20 compute-0 nova_compute[185650]: 
Jan 27 23:07:20 compute-0 nova_compute[185650]: 
Jan 27 23:07:20 compute-0 nova_compute[185650]: Neutron server returns request_ids: ['req-7349e9a7-ce70-4c34-beea-fef4691f1e14']
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] Traceback (most recent call last):
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2864, in _build_resources
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     yield resources
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2611, in _build_and_run_instance
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     self.driver.spawn(context, instance, image_meta,
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 4407, in spawn
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     xml = self._get_guest_xml(context, instance, network_info,
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 7538, in _get_guest_xml
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     network_info_str = str(network_info)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/network/model.py", line 620, in __str__
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     return self._sync_wrapper(fn, *args, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/network/model.py", line 603, in _sync_wrapper
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     self.wait()
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/network/model.py", line 635, in wait
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     self[:] = self._gt.wait()
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/eventlet/greenthread.py", line 181, in wait
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     return self._exit_event.wait()
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/eventlet/event.py", line 125, in wait
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     result = hub.switch()
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/eventlet/hubs/hub.py", line 313, in switch
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     return self.greenlet.switch()
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/eventlet/greenthread.py", line 221, in main
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     result = function(*args, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/utils.py", line 654, in context_wrapper
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     return func(*args, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 1982, in _allocate_network_async
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     raise e
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 1960, in _allocate_network_async
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     nwinfo = self.network_api.allocate_for_instance(
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 1193, in allocate_for_instance
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     security_group_ids = self._process_security_groups(
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 851, in _process_security_groups
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     user_security_groups = neutron.list_security_groups(
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     ret = obj(*args, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 1063, in list_security_groups
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     return self.list('security_groups', self.security_groups_path,
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     ret = obj(*args, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 372, in list
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     for r in self._pagination(collection, path, **params):
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 387, in _pagination
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     res = self.get(path, params=params)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     ret = obj(*args, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 356, in get
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     return self.retry_request("GET", action, body=body,
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     ret = obj(*args, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 333, in retry_request
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     return self.do_request(method, action, body=body,
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     ret = obj(*args, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 297, in do_request
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     self._handle_fault_response(status_code, replybody, resp)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     ret = obj(*args, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 272, in _handle_fault_response
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     exception_handler_v20(status_code, error_body)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 90, in exception_handler_v20
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     raise client_exc(message=error_message,
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] neutronclient.common.exceptions.ServiceUnavailable: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] The Keystone service is temporarily unavailable.
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] 
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] 
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] Neutron server returns request_ids: ['req-7349e9a7-ce70-4c34-beea-fef4691f1e14']
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.244 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] 
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.250 185654 INFO nova.compute.manager [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] Terminating instance
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.252 185654 DEBUG oslo_concurrency.lockutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Acquiring lock "refresh_cache-dd37badf-e0f2-4ba3-b12e-f4238236f28d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.252 185654 DEBUG oslo_concurrency.lockutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Acquired lock "refresh_cache-dd37badf-e0f2-4ba3-b12e-f4238236f28d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.253 185654 DEBUG nova.network.neutron [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.295 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.307 185654 DEBUG oslo_concurrency.lockutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Releasing lock "refresh_cache-dd37badf-e0f2-4ba3-b12e-f4238236f28d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.309 185654 WARNING nova.compute.manager [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Could not clean up failed build, not rescheduling. Error: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 23:07:20 compute-0 nova_compute[185650]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 23:07:20 compute-0 nova_compute[185650]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance_info_cache.py", line 107, in refresh\n    current = self.__class__.get_by_instance_uuid(self._context,\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in 
wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance_info_cache.py", line 67, in get_by_instance_uuid\n    db_obj = db.instance_info_cache_get(context, instance_uuid)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/utils.py", line 35, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 2473, in instance_info_cache_get\n    return model_query(context, models.InstanceInfoCache).\\\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    
Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.311 185654 DEBUG nova.compute.claims [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] Aborting claim: <nova.compute.claims.Claim object at 0x7f6984228430> abort /usr/lib/python3.9/site-packages/nova/compute/claims.py:85
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.313 185654 DEBUG oslo_concurrency.lockutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.abort_instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.314 185654 DEBUG oslo_concurrency.lockutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.abort_instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.373 185654 DEBUG oslo_concurrency.lockutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.abort_instance_claim" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] Failed to build and run instance: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 23:07:20 compute-0 nova_compute[185650]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 23:07:20 compute-0 nova_compute[185650]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 809, in save\n    db.instance_extra_update_by_uuid(context, self.uuid,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, 
*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 2547, in instance_extra_update_by_uuid\n    rows_updated = model_query(context, models.InstanceExtra).\\\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 3306, in update\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 
'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] Traceback (most recent call last):
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2864, in _build_resources
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     yield resources
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2611, in _build_and_run_instance
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     self.driver.spawn(context, instance, image_meta,
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 4407, in spawn
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     xml = self._get_guest_xml(context, instance, network_info,
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 7538, in _get_guest_xml
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     network_info_str = str(network_info)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/network/model.py", line 620, in __str__
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     return self._sync_wrapper(fn, *args, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/network/model.py", line 603, in _sync_wrapper
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     self.wait()
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/network/model.py", line 635, in wait
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     self[:] = self._gt.wait()
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/eventlet/greenthread.py", line 181, in wait
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     return self._exit_event.wait()
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/eventlet/event.py", line 125, in wait
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     result = hub.switch()
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/eventlet/hubs/hub.py", line 313, in switch
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     return self.greenlet.switch()
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/eventlet/greenthread.py", line 221, in main
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     result = function(*args, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/utils.py", line 654, in context_wrapper
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     return func(*args, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 1982, in _allocate_network_async
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     raise e
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 1960, in _allocate_network_async
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     nwinfo = self.network_api.allocate_for_instance(
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 1193, in allocate_for_instance
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     security_group_ids = self._process_security_groups(
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 851, in _process_security_groups
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     user_security_groups = neutron.list_security_groups(
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     ret = obj(*args, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 1063, in list_security_groups
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     return self.list('security_groups', self.security_groups_path,
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     ret = obj(*args, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 372, in list
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     for r in self._pagination(collection, path, **params):
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 387, in _pagination
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     res = self.get(path, params=params)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     ret = obj(*args, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 356, in get
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     return self.retry_request("GET", action, body=body,
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     ret = obj(*args, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 333, in retry_request
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     return self.do_request(method, action, body=body,
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     ret = obj(*args, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 297, in do_request
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     self._handle_fault_response(status_code, replybody, resp)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     ret = obj(*args, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 272, in _handle_fault_response
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     exception_handler_v20(status_code, error_body)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 90, in exception_handler_v20
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     raise client_exc(message=error_message,
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] neutronclient.common.exceptions.ServiceUnavailable: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] The Keystone service is temporarily unavailable.
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] 
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] 
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] Neutron server returns request_ids: ['req-7349e9a7-ce70-4c34-beea-fef4691f1e14']
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] 
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] During handling of the above exception, another exception occurred:
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] 
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] Traceback (most recent call last):
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2881, in _build_resources
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     self._shutdown_instance(context, instance,
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 3109, in _shutdown_instance
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     network_info = self.network_api.get_instance_nw_info(
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 1990, in get_instance_nw_info
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     result = self._get_instance_nw_info(context, instance, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 2015, in _get_instance_nw_info
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     compute_utils.refresh_info_cache_for_instance(context, instance)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/compute/utils.py", line 1008, in refresh_info_cache_for_instance
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     instance.info_cache.refresh()
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     updates, result = self.indirection_api.object_action(
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     return cctxt.call(context, 'object_action', objinst=objinst,
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     result = self.transport._send(
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     return self._driver.send(target, ctxt, message,
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     raise result
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance_info_cache.py", line 107, in refresh\n    current = 
self.__class__.get_by_instance_uuid(self._context,\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance_info_cache.py", line 67, in get_by_instance_uuid\n    db_obj = db.instance_info_cache_get(context, instance_uuid)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/utils.py", line 35, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 2473, in instance_info_cache_get\n    return model_query(context, models.InstanceInfoCache).\\\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in 
__connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] 
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] During handling of the above exception, another exception occurred:
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] 
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] Traceback (most recent call last):
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2616, in _build_and_run_instance
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     LOG.info('Took %0.2f seconds to spawn the instance on '
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     self.gen.throw(typ, value, traceback)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2889, in _build_resources
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     raise exception.BuildAbortException(
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] nova.exception.BuildAbortException: Build of instance dd37badf-e0f2-4ba3-b12e-f4238236f28d aborted: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] The Keystone service is temporarily unavailable.
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] 
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] 
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] Neutron server returns request_ids: ['req-7349e9a7-ce70-4c34-beea-fef4691f1e14']
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] 
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] During handling of the above exception, another exception occurred:
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] 
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] Traceback (most recent call last):
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2616, in _build_and_run_instance
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     LOG.info('Took %0.2f seconds to spawn the instance on '
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/compute/claims.py", line 43, in __exit__
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     self.abort()
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/compute/claims.py", line 86, in abort
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     self.tracker.abort_instance_claim(self.context, self.instance_ref,
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py", line 414, in inner
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     return f(*args, **kwargs)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py", line 548, in abort_instance_claim
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     self._unset_instance_host_and_node(instance)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py", line 539, in _unset_instance_host_and_node
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     instance.save()
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     updates, result = self.indirection_api.object_action(
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     return cctxt.call(context, 'object_action', objinst=objinst,
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     result = self.transport._send(
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     return self._driver.send(target, ctxt, message,
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d]     raise result
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 809, in save\n    db.instance_extra_update_by_uuid(context, 
self.uuid,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 2547, in instance_extra_update_by_uuid\n    rows_updated = model_query(context, models.InstanceExtra).\\\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 3306, in update\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n 
   self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] 
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.384 185654 DEBUG nova.compute.utils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] RemoteError notify_about_instance_usage /usr/lib/python3.9/site-packages/nova/compute/utils.py:430
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.385 185654 DEBUG nova.compute.manager [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] Build of instance dd37badf-e0f2-4ba3-b12e-f4238236f28d was re-scheduled: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 23:07:20 compute-0 nova_compute[185650]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 23:07:20 compute-0 nova_compute[185650]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 809, in save\n    db.instance_extra_update_by_uuid(context, self.uuid,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, 
*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 2547, in instance_extra_update_by_uuid\n    rows_updated = model_query(context, models.InstanceExtra).\\\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 3306, in update\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 
'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n']. _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2450
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.387 185654 DEBUG nova.compute.manager [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] Unplugging VIFs for instance _cleanup_allocated_networks /usr/lib/python3.9/site-packages/nova/compute/manager.py:2976
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.387 185654 DEBUG oslo_concurrency.lockutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Acquiring lock "refresh_cache-dd37badf-e0f2-4ba3-b12e-f4238236f28d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.388 185654 DEBUG oslo_concurrency.lockutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Acquired lock "refresh_cache-dd37badf-e0f2-4ba3-b12e-f4238236f28d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.388 185654 DEBUG nova.network.neutron [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 23:07:20 compute-0 rsyslogd[235951]: message too long (8152) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 23:07:20 compute-0 rsyslogd[235951]: message too long (8851) with configured size 8096, begin of message is: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 23:07:20 compute-0 rsyslogd[235951]: message too long (8259) with configured size 8096, begin of message is: 2026-01-27 23:07:20.374 185654 ERROR nova.compute.manager [instance: dd37badf-e0 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.457 185654 DEBUG oslo_concurrency.lockutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Releasing lock "refresh_cache-dd37badf-e0f2-4ba3-b12e-f4238236f28d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.458 185654 WARNING nova.compute.manager [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Failed to update network info cache when cleaning up allocated networks. Stale VIFs may be left on this host.Error: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 23:07:20 compute-0 nova_compute[185650]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 23:07:20 compute-0 nova_compute[185650]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance_info_cache.py", line 107, in refresh\n    current = self.__class__.get_by_instance_uuid(self._context,\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in 
wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance_info_cache.py", line 67, in get_by_instance_uuid\n    db_obj = db.instance_info_cache_get(context, instance_uuid)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/utils.py", line 35, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 2473, in instance_info_cache_get\n    return model_query(context, models.InstanceInfoCache).\\\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    
Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.622 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.733 185654 ERROR root [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Original exception being dropped: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2864, in _build_resources\n    yield resources\n', '  File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2611, in _build_and_run_instance\n    self.driver.spawn(context, instance, image_meta,\n', '  File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 4407, in spawn\n    xml = self._get_guest_xml(context, instance, network_info,\n', '  File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 7538, in _get_guest_xml\n    network_info_str = str(network_info)\n', '  File "/usr/lib/python3.9/site-packages/nova/network/model.py", line 620, in __str__\n    return self._sync_wrapper(fn, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/network/model.py", line 603, in _sync_wrapper\n    self.wait()\n', '  File "/usr/lib/python3.9/site-packages/nova/network/model.py", line 635, in wait\n    self[:] = self._gt.wait()\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenthread.py", line 181, in wait\n    return self._exit_event.wait()\n', '  File "/usr/lib/python3.9/site-packages/eventlet/event.py", line 125, in wait\n    result = hub.switch()\n', '  File "/usr/lib/python3.9/site-packages/eventlet/hubs/hub.py", line 313, in switch\n    return self.greenlet.switch()\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenthread.py", line 221, in main\n    result = function(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/utils.py", line 654, in context_wrapper\n    return func(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 1982, in _allocate_network_async\n    
raise e\n', '  File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 1960, in _allocate_network_async\n    nwinfo = self.network_api.allocate_for_instance(\n', '  File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 1193, in allocate_for_instance\n    security_group_ids = self._process_security_groups(\n', '  File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 851, in _process_security_groups\n    user_security_groups = neutron.list_security_groups(\n', '  File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper\n    ret = obj(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 1063, in list_security_groups\n    return self.list(\'security_groups\', self.security_groups_path,\n', '  File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper\n    ret = obj(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 372, in list\n    for r in self._pagination(collection, path, **params):\n', '  File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 387, in _pagination\n    res = self.get(path, params=params)\n', '  File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper\n    ret = obj(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 356, in get\n    return self.retry_request("GET", action, body=body,\n', '  File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper\n    ret = obj(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 333, in retry_request\n    return self.do_request(method, action, body=body,\n', '  File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper\n    ret = obj(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", 
line 297, in do_request\n    self._handle_fault_response(status_code, replybody, resp)\n', '  File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper\n    ret = obj(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 272, in _handle_fault_response\n    exception_handler_v20(status_code, error_body)\n', '  File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 90, in exception_handler_v20\n    raise client_exc(message=error_message,\n', "neutronclient.common.exceptions.ServiceUnavailable: The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n\nNeutron server returns request_ids: ['req-7349e9a7-ce70-4c34-beea-fef4691f1e14']\n", '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2881, in _build_resources\n    self._shutdown_instance(context, instance,\n', '  File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 3109, in _shutdown_instance\n    network_info = self.network_api.get_instance_nw_info(\n', '  File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 1990, in get_instance_nw_info\n    result = self._get_instance_nw_info(context, instance, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 2015, in _get_instance_nw_info\n    compute_utils.refresh_info_cache_for_instance(context, instance)\n', '  File "/usr/lib/python3.9/site-packages/nova/compute/utils.py", line 1008, in refresh_info_cache_for_instance\n    instance.info_cache.refresh()\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper\n    updates, result = self.indirection_api.object_action(\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in 
object_action\n    return cctxt.call(context, \'object_action\', objinst=objinst,\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call\n    result = self.transport._send(\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send\n    return self._driver.send(target, ctxt, message,\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send\n    return self._send(target, ctxt, message, wait_for_reply, timeout,\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send\n    raise result\n', 'oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n[\'Traceback (most recent call last):\\n\', \'  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\\n    sock = socket.create_connection(\\n\', \'  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\\n    raise err\\n\', \'  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\\n    sock.connect(sa)\\n\', \'  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\\n    socket_checkerr(fd)\\n\', \'  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\\n    raise socket.error(err, errno.errorcode[err])\\n\', \'ConnectionRefusedError: [Errno 111] ECONNREFUSED\\n\', \'\\nDuring handling of the above exception, another exception occurred:\\n\\n\', \'Traceback (most recent call last):\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\\n    return fn()\\n\', \'  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\\n    return _ConnectionFairy._checkout(self)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\\n    fairy = _ConnectionRecord.checkout(pool)\\n\', \'  File "
/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\\n    rec._checkin_failed(err, _fairy_was_created=False)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\\n    compat.raise_(\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\\n    raise exception\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\\n    dbapi_connection = rec.get_connection()\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\\n    self.__connect()\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\\n    pool.logger.debug("Error on connect(): %s", e)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\\n    compat.raise_(\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\\n    raise exception\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\\n    self.dbapi_connection = connection = pool._invoke_creator(self)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\\n    return dialect.connect(*cargs, **cparams)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\\n    return self.dbapi.connect(*cargs, **cparams)\\n\', \'  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\\n    return Connection(*args, **kwargs)\\n\', \'  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\\n    self.connect()\\n\', \'  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\\n    raise exc\\n\', \'pymysql.err.OperationalError: (2003, "Can\\\'t 
connect to MySQL server on \\\'openstack-cell1.openstack.svc\\\' ([Errno 111] ECONNREFUSED)")\\n\', \'\\nThe above exception was the direct cause of the following exception:\\n\\n\', \'Traceback (most recent call last):\\n\', \'  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\\n    return getattr(target, method)(*args, **kwargs)\\n\', \'  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\\n    return fn(self, *args, **kwargs)\\n\', \'  File "/usr/lib/python3.9/site-packages/nova/objects/instance_info_cache.py", line 107, in refresh\\n    current = self.__class__.get_by_instance_uuid(self._context,\\n\', \'  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\\n    result = fn(cls, context, *args, **kwargs)\\n\', \'  File "/usr/lib/python3.9/site-packages/nova/objects/instance_info_cache.py", line 67, in get_by_instance_uuid\\n    db_obj = db.instance_info_cache_get(context, instance_uuid)\\n\', \'  File "/usr/lib/python3.9/site-packages/nova/db/utils.py", line 35, in wrapper\\n    return f(*args, **kwargs)\\n\', \'  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\\n    return f(context, *args, **kwargs)\\n\', \'  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 2473, in instance_info_cache_get\\n    return model_query(context, models.InstanceInfoCache).\\\\\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\\n    return self.limit(1)._iter().first()\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\\n    result = self.session.execute(\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\\n    conn = self._connection_for_bind(bind)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\\n   
 return self._transaction._connection_for_bind(\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\\n    conn = bind.connect()\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\\n    return self._connection_cls(self, close_with_result=close_with_result)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\\n    else engine.raw_connection()\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\\n    return self._wrap_pool_connect(self.pool.connect, _connection)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\\n    Connection._handle_dbapi_exception_noconnection(\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\\n    raise exception\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\\n    return fn()\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\\n    return _ConnectionFairy._checkout(self)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\\n    fairy = _ConnectionRecord.checkout(pool)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\\n    rec._checkin_failed(err, _fairy_was_created=False)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\\n    compat.raise_(\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\\n    raise exception\\n\', \'  
File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\\n    dbapi_connection = rec.get_connection()\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\\n    self.__connect()\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\\n    pool.logger.debug("Error on connect(): %s", e)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\\n    compat.raise_(\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\\n    raise exception\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\\n    self.dbapi_connection = connection = pool._invoke_creator(self)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\\n    return dialect.connect(*cargs, **cparams)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\\n    return self.dbapi.connect(*cargs, **cparams)\\n\', \'  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\\n    return Connection(*args, **kwargs)\\n\', \'  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\\n    self.connect()\\n\', \'  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\\n    raise exc\\n\', \'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\\\'t connect to MySQL server on \\\'openstack-cell1.openstack.svc\\\' ([Errno 111] ECONNREFUSED)")\\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\\n\'].\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2616, in _build_and_run_instance\n    
LOG.info(\'Took %0.2f seconds to spawn the instance on \'\n', '  File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__\n    self.gen.throw(typ, value, traceback)\n', '  File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2889, in _b
uild_resources\n    raise exception.BuildAbortException(\n', "nova.exception.BuildAbortException: Build of instance dd37badf-e0f2-4ba3-b12e-f4238236f28d aborted: The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n\nNeutron server returns request_ids: ['req-7349e9a7-ce70-4c34-beea-fef4691f1e14']\n", '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2616, in _build_and_run_instance\n    LOG.info(\'Took %0.2f seconds to spawn the instance on \'\n', '  File "/usr/lib/python3.9/site-packages/nova/compute/claims.py", line 43, in __exit__\n    self.abort()\n', '  File "/usr/lib/python3.9/site-packages/nova/compute/claims.py", line 86, in abort\n    self.tracker.abort_instance_claim(self.context, self.instance_ref,\n', '  File "/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py", line 414, in inner\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py", line 548, in abort_instance_claim\n    self._unset_instance_host_and_node(instance)\n', '  File "/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py", line 539, in _unset_instance_host_and_node\n    instance.save()\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper\n    updates, result = self.indirection_api.object_action(\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action\n    return cctxt.call(context, \'object_action\', objinst=objinst,\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call\n    result = self.transport._send(\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send\n    return 
self._driver.send(target, ctxt, message,\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send\n    return self._send(target, ctxt, message, wait_for_reply, timeout,\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send\n    raise result\n', 'oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n[\'Traceback (most recent call last):\\n\', \'  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\\n    sock = socket.create_connection(\\n\', \'  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\\n    raise err\\n\', \'  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\\n    sock.connect(sa)\\n\', \'  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\\n    socket_checkerr(fd)\\n\', \'  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\\n    raise socket.error(err, errno.errorcode[err])\\n\', \'ConnectionRefusedError: [Errno 111] ECONNREFUSED\\n\', \'\\nDuring handling of the above exception, another exception occurred:\\n\\n\', \'Traceback (most recent call last):\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\\n    return fn()\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\\n    return _ConnectionFairy._checkout(self)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\\n    fairy = _ConnectionRecord.checkout(pool)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", 
line 496, in checkout\\n    rec._checkin_failed(err, _fairy_was_created=False)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\\n    compat.raise_(\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\\n    raise exception\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\\n    dbapi_connection = rec.get_connection()\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\\n    self.__connect()\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\\n    pool.logger.debug("Error on connect(): %s", e)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\\n    compat.raise_(\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\\n    raise exception\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\\n    self.dbapi_connection = connection = pool._invoke_creator(self)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\\n    return dialect.connect(*cargs, **cparams)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\\n    return self.dbapi.connect(*cargs, **cparams)\\n\', \'  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\\n    return Connection(*args, **kwargs)\\n\', \'  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\\n    self.connect()\\n\', \'  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\\n    raise exc\\n\', \'pymysql.err.OperationalError: (2003, "Can\\\'t connect to MySQL server on \\\'openstack-cell1.openstack.svc\\\' ([Errno 111] ECONNREFUSED)")\\n\', \'\\nThe 
above exception was the direct cause of the following exception:\\n\\n\', \'Traceback (most recent call last):\\n\', \'  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\\n    return getattr(target, method)(*args, **kwargs)\\n\', \'  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\\n    return fn(self, *args, **kwargs)\\n\', \'  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 809, in save\\n    db.instance_extra_update_by_uuid(context, self.uuid,\\n\', \'  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\\n    return f(context, *args, **kwargs)\\n\', \'  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 2547, in instance_extra_update_by_uuid\\n    rows_updated = model_query(context, models.InstanceExtra).\\\\\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 3306, in update\\n    result = self.session.execute(\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\\n    conn = self._connection_for_bind(bind)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\\n    return self._transaction._connection_for_bind(\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\\n    conn = bind.connect()\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\\n    return self._connection_cls(self, close_with_result=close_with_result)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\\n    else engine.raw_connection()\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\\n    return self._wrap_pool_connect(self.pool.connect, _connection)\\n\', \'  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\\n    Connection._handle_dbapi_exception_noconnection(\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnect
ion\\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\\n    raise exception\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\\n    return fn()\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\\n    return _ConnectionFairy._checkout(self)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\\n    fairy = _ConnectionRecord.checkout(pool)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\\n    rec._checkin_failed(err, _fairy_was_created=False)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\\n    compat.raise_(\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\\n    raise exception\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\\n    dbapi_connection = rec.get_connection()\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\\n    self.__connect()\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\\n    pool.logger.debug("Error on connect(): %s", e)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\\n    compat.raise_(\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\\n    raise exception\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\\n    self.dbapi_connection = connection = pool._invoke_creator(self)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in 
connect\\n    return dialect.connect(*cargs, **cparams)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\\n    return self.dbapi.connect(*cargs, **cparams)\\n\', \'  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\\n    return Connection(*args, **kwargs)\\n\', \'  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\\n    self.connect()\\n\', \'  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\\n    raise exc\\n\', \'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\\\'t connect to MySQL server on \\\'openstack-cell1.openstack.svc\\\' ([Errno 111] ECONNREFUSED)")\\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\\n\'].\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2426, in _do_build_and_run_instance\n    self._build_and_run_instance(context, instance, image,\n', '  File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2718, in _build_and_run_instance\n    raise exception.RescheduledException(\n', 'nova.exception.RescheduledException: Build of instance dd37badf-e0f2-4ba3-b12e-f4238236f28d was re-scheduled: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n[\'Traceback (most recent call last):\\n\', \'  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\\n    sock = socket.create_connection(\\n\', \'  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\\n    raise err\\n\', \'  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in 
create_connection\\n    sock.connect(sa)\\n\', \'  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\\n    socket_checkerr(fd)\\n\', \'  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\\n    raise socket.error(err, errno.errorcode[err])\\n\', \'ConnectionRefusedError: [Errno 111] ECONNREFUSED\\n\', \'\\nDuring handling of the above exception, another exception occurred:\\n\\n\', \'Traceback (most recent call last):\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\\n    return fn()\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\\n    return _ConnectionFairy._checkout(self)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\\n    fairy = _ConnectionRecord.checkout(pool)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\\n    rec._checkin_failed(err, _fairy_was_created=False)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\\n    compat.raise_(\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\\n    raise exception\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\\n    dbapi_connection = rec.get_connection()\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\\n    self.__connect()\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\\n    pool.logger.debug("Error on connect(): %s", e)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\\n    compat.raise_(\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\\n    raise 
exception\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\\n    self.dbapi_connection = connection = pool._invoke_creator(self)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\\n    return dialect.connect(*cargs, **cparams)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\\n    return self.dbapi.connect(*cargs, **cparams)\\n\', \'  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\\n    return Connection(*args, **kwargs)\\n\', \'  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\\n    self.connect()\\n\', \'  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\\n    raise exc\\n\', \'pymysql.err.OperationalError: (2003, "Can\\\'t connect to MySQL server on \\\'openstack-cell1.openstack.svc\\\' ([Errno 111] ECONNREFUSED)")\\n\', \'\\nThe above exception was the direct cause of the following exception:\\n\\n\', \'Traceback (most recent call last):\\n\', \'  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\\n    return getattr(target, method)(*args, **kwargs)\\n\', \'  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\\n    return fn(self, *args, **kwargs)\\n\', \'  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 809, in save\\n    db.instance_extra_update_by_uuid(context, self.uuid,\\n\', \'  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\\n    return f(context, *args, **kwargs)\\n\', \'  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 2547, in instance_extra_update_by_uuid\\n    rows_updated = model_query(context, models.InstanceExtra).\\\\\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 3306, in update\\n    
result = self.session.execute(\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\\n    conn = self._connection_for_bind(bind)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", lin
Jan 27 23:07:20 compute-0 nova_compute[185650]: e 1552, in _connection_for_bind\\n    return self._transaction._connection_for_bind(\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\\n    conn = bind.connect()\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\\n    return self._connection_cls(self, close_with_result=close_with_result)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\\n    else engine.raw_connection()\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\\n    return self._wrap_pool_connect(self.pool.connect, _connection)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\\n    Connection._handle_dbapi_exception_noconnection(\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\\n    raise exception\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\\n    return fn()\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\\n    return _ConnectionFairy._checkout(self)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\\n    fairy = _ConnectionRecord.checkout(pool)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\\n    rec._checkin_failed(err, _fairy_was_created=False)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\\n    compat.raise_(\\n\', \'  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\\n    raise exception\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\\n    dbapi_connection = rec.get_connection()\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\\n    self.__connect()\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\\n    pool.logger.debug("Error on connect(): %s", e)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\\n    compat.raise_(\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\\n    raise exception\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\\n    self.dbapi_connection = connection = pool._invoke_creator(self)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\\n    return dialect.connect(*cargs, **cparams)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\\n    return self.dbapi.connect(*cargs, **cparams)\\n\', \'  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\\n    return Connection(*args, **kwargs)\\n\', \'  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\\n    self.connect()\\n\', \'  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\\n    raise exc\\n\', \'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\\\'t connect to MySQL server on \\\'openstack-cell1.openstack.svc\\\' ([Errno 111] ECONNREFUSED)")\\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\\n\'].\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File 
"/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 203, in decorated_function\n    return function(self, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2461, in _do_build_and_run_instance\n    instance.save()\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper\n    updates, result = self.indirection_api.object_action(\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action\n    return cctxt.call(context, \'object_action\', objinst=objinst,\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call\n    result = self.transport._send(\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send\n    return self._driver.send(target, ctxt, message,\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send\n    return self._send(target, ctxt, message, wait_for_reply, timeout,\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send\n    raise result\n', 'oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n[\'Traceback (most recent call last):\\n\', \'  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\\n    sock = socket.create_connection(\\n\', \'  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\\n    raise err\\n\', \'  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\\n    sock.connect(sa)\\n\', \'  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\\n    
socket_checkerr(fd)\\n\', \'  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\\n    raise socket.error(err, errno.errorcode[err])\\n\', \'ConnectionRefusedError: [Errno 111] ECONNREFUSED\\n\', \'\\nDuring handling of the above exception, another exception occurred:\\n\\n\', \'Traceback (most recent call last):\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\\n    return fn()\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\\n    return _ConnectionFairy._checkout(self)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\\n    fairy = _ConnectionRecord.checkout(pool)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\\n    rec._checkin_failed(err, _fairy_was_created=False)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\\n    compat.raise_(\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\\n    raise exception\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\\n    dbapi_connection = rec.get_connection()\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\\n    self.__connect()\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\\n    pool.logger.debug("Error on connect(): %s", e)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\\n    compat.raise_(\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\\n    raise exception\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\\n    self.dbapi_connection = connection = 
pool._invoke_creator(self)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\\n    return dialect.connect(*cargs, **cparams)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\\n    return se
Jan 27 23:07:20 compute-0 nova_compute[185650]: lf.dbapi.connect(*cargs, **cparams)\\n\', \'  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\\n    return Connection(*args, **kwargs)\\n\', \'  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\\n    self.connect()\\n\', \'  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\\n    raise exc\\n\', \'pymysql.err.OperationalError: (2003, "Can\\\'t connect to MySQL server on \\\'openstack-cell1.openstack.svc\\\' ([Errno 111] ECONNREFUSED)")\\n\', \'\\nThe above exception was the direct cause of the following exception:\\n\\n\', \'Traceback (most recent call last):\\n\', \'  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\\n    return getattr(target, method)(*args, **kwargs)\\n\', \'  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\\n    return fn(self, *args, **kwargs)\\n\', \'  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 809, in save\\n    db.instance_extra_update_by_uuid(context, self.uuid,\\n\', \'  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\\n    return f(context, *args, **kwargs)\\n\', \'  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 2547, in instance_extra_update_by_uuid\\n    rows_updated = model_query(context, models.InstanceExtra).\\\\\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 3306, in update\\n    result = self.session.execute(\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\\n    conn = self._connection_for_bind(bind)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\\n    return self._transaction._connection_for_bind(\\n\', \'  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\\n    conn = bind.connect()\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\\n    return self._connection_cls(self, close_with_result=close_with_result)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\\n    else engine.raw_connection()\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\\n    return self._wrap_pool_connect(self.pool.connect, _connection)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\\n    Connection._handle_dbapi_exception_noconnection(\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\\n    raise exception\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\\n    return fn()\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\\n    return _ConnectionFairy._checkout(self)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\\n    fairy = _ConnectionRecord.checkout(pool)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\\n    rec._checkin_failed(err, _fairy_was_created=False)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\\n    compat.raise_(\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\\n    raise exception\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", 
line 493, in checkout\\n    dbapi_connection = rec.get_connection()\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\\n    self.__connect()\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\\n    pool.logger.debug("Error on connect(): %s", e)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\\n    compat.raise_(\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\\n    raise exception\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\\n    self.dbapi_connection = connection = pool._invoke_creator(self)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\\n    return dialect.connect(*cargs, **cparams)\\n\', \'  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\\n    return self.dbapi.connect(*cargs, **cparams)\\n\', \'  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\\n    return Connection(*args, **kwargs)\\n\', \'  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\\n    self.connect()\\n\', \'  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\\n    raise exc\\n\', \'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\\\'t connect to MySQL server on \\\'openstack-cell1.openstack.svc\\\' ([Errno 111] ECONNREFUSED)")\\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\\n\'].\n']: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 23:07:20 compute-0 nova_compute[185650]: 2026-01-27 23:07:20.935 185654 WARNING nova.compute.manager [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] Failed to revert task state for instance. Error: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 23:07:20 compute-0 nova_compute[185650]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 23:07:20 compute-0 nova_compute[185650]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 809, in save\n    db.instance_extra_update_by_uuid(context, self.uuid,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, 
*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 2547, in instance_extra_update_by_uuid\n    rows_updated = model_query(context, models.InstanceExtra).\\\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 3306, in update\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 
'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 23:07:20 compute-0 rsyslogd[235951]: message too long (8192) with configured size 8096, begin of message is: 2026-01-27 23:07:20.733 185654 ERROR root [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cc [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 23:07:20 compute-0 rsyslogd[235951]: message too long (8192) with configured size 8096, begin of message is: /usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checko [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 23:07:20 compute-0 rsyslogd[235951]: message too long (8192) with configured size 8096, begin of message is: uild_resources\n    raise exception.BuildAbortException(\n', "nova.exception.Bui [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 23:07:20 compute-0 rsyslogd[235951]: message too long (8192) with configured size 8096, begin of message is: ion\\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\\n\', \'  F [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 23:07:20 compute-0 rsyslogd[235951]: message too long (8192) with configured size 8096, begin of message is: e 1552, in _connection_for_bind\\n    return self._transaction._connection_for_b [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 23:07:22 compute-0 nova_compute[185650]: 2026-01-27 23:07:22.387 185654 WARNING nova.scheduler.client.report [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Unable to delete allocation for instance dd37badf-e0f2-4ba3-b12e-f4238236f28d: (500 {"errors": [{"status": 500, "title": "Internal Server Error", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n (pymysql.err.OperationalError) (2003, \"Can't connect to MySQL server on 'openstack.openstack.svc' ([Errno 111] Connection refused)\") [SQL: SELECT 1] (Background on this error at: https://sqlalche.me/e/14/e3q8)  ", "request_id": "req-97707d8a-4beb-40b6-89cf-fbc1e75b0b24"}]}): oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 23:07:22 compute-0 nova_compute[185650]: 2026-01-27 23:07:22.395 185654 DEBUG oslo_concurrency.lockutils [None req-7d66bcd2-ca4c-4ee7-9c4e-c0cca5040d74 ff1ffdaed4ce4dfcadad3d10f8683070 51f3209263824d18abe4e752dc4c06d5 - - default default] Lock "dd37badf-e0f2-4ba3-b12e-f4238236f28d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.364s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:07:22 compute-0 nova_compute[185650]: Traceback (most recent call last):
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2864, in _build_resources
Jan 27 23:07:22 compute-0 nova_compute[185650]:     yield resources
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2611, in _build_and_run_instance
Jan 27 23:07:22 compute-0 nova_compute[185650]:     self.driver.spawn(context, instance, image_meta,
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 4407, in spawn
Jan 27 23:07:22 compute-0 nova_compute[185650]:     xml = self._get_guest_xml(context, instance, network_info,
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 7538, in _get_guest_xml
Jan 27 23:07:22 compute-0 nova_compute[185650]:     network_info_str = str(network_info)
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/network/model.py", line 620, in __str__
Jan 27 23:07:22 compute-0 nova_compute[185650]:     return self._sync_wrapper(fn, *args, **kwargs)
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/network/model.py", line 603, in _sync_wrapper
Jan 27 23:07:22 compute-0 nova_compute[185650]:     self.wait()
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/network/model.py", line 635, in wait
Jan 27 23:07:22 compute-0 nova_compute[185650]:     self[:] = self._gt.wait()
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/eventlet/greenthread.py", line 181, in wait
Jan 27 23:07:22 compute-0 nova_compute[185650]:     return self._exit_event.wait()
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/eventlet/event.py", line 125, in wait
Jan 27 23:07:22 compute-0 nova_compute[185650]:     result = hub.switch()
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/eventlet/hubs/hub.py", line 313, in switch
Jan 27 23:07:22 compute-0 nova_compute[185650]:     return self.greenlet.switch()
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/eventlet/greenthread.py", line 221, in main
Jan 27 23:07:22 compute-0 nova_compute[185650]:     result = function(*args, **kwargs)
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/utils.py", line 654, in context_wrapper
Jan 27 23:07:22 compute-0 nova_compute[185650]:     return func(*args, **kwargs)
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 1982, in _allocate_network_async
Jan 27 23:07:22 compute-0 nova_compute[185650]:     raise e
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 1960, in _allocate_network_async
Jan 27 23:07:22 compute-0 nova_compute[185650]:     nwinfo = self.network_api.allocate_for_instance(
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 1193, in allocate_for_instance
Jan 27 23:07:22 compute-0 nova_compute[185650]:     security_group_ids = self._process_security_groups(
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 851, in _process_security_groups
Jan 27 23:07:22 compute-0 nova_compute[185650]:     user_security_groups = neutron.list_security_groups(
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 27 23:07:22 compute-0 nova_compute[185650]:     ret = obj(*args, **kwargs)
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 1063, in list_security_groups
Jan 27 23:07:22 compute-0 nova_compute[185650]:     return self.list('security_groups', self.security_groups_path,
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 27 23:07:22 compute-0 nova_compute[185650]:     ret = obj(*args, **kwargs)
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 372, in list
Jan 27 23:07:22 compute-0 nova_compute[185650]:     for r in self._pagination(collection, path, **params):
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 387, in _pagination
Jan 27 23:07:22 compute-0 nova_compute[185650]:     res = self.get(path, params=params)
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 27 23:07:22 compute-0 nova_compute[185650]:     ret = obj(*args, **kwargs)
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 356, in get
Jan 27 23:07:22 compute-0 nova_compute[185650]:     return self.retry_request("GET", action, body=body,
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 27 23:07:22 compute-0 nova_compute[185650]:     ret = obj(*args, **kwargs)
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 333, in retry_request
Jan 27 23:07:22 compute-0 nova_compute[185650]:     return self.do_request(method, action, body=body,
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 27 23:07:22 compute-0 nova_compute[185650]:     ret = obj(*args, **kwargs)
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 297, in do_request
Jan 27 23:07:22 compute-0 nova_compute[185650]:     self._handle_fault_response(status_code, replybody, resp)
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 196, in wrapper
Jan 27 23:07:22 compute-0 nova_compute[185650]:     ret = obj(*args, **kwargs)
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 272, in _handle_fault_response
Jan 27 23:07:22 compute-0 nova_compute[185650]:     exception_handler_v20(status_code, error_body)
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py", line 90, in exception_handler_v20
Jan 27 23:07:22 compute-0 nova_compute[185650]:     raise client_exc(message=error_message,
Jan 27 23:07:22 compute-0 nova_compute[185650]: neutronclient.common.exceptions.ServiceUnavailable: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 23:07:22 compute-0 nova_compute[185650]: The Keystone service is temporarily unavailable.
Jan 27 23:07:22 compute-0 nova_compute[185650]: 
Jan 27 23:07:22 compute-0 nova_compute[185650]: 
Jan 27 23:07:22 compute-0 nova_compute[185650]: Neutron server returns request_ids: ['req-7349e9a7-ce70-4c34-beea-fef4691f1e14']
Jan 27 23:07:22 compute-0 nova_compute[185650]: 
Jan 27 23:07:22 compute-0 nova_compute[185650]: During handling of the above exception, another exception occurred:
Jan 27 23:07:22 compute-0 nova_compute[185650]: 
Jan 27 23:07:22 compute-0 nova_compute[185650]: Traceback (most recent call last):
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2881, in _build_resources
Jan 27 23:07:22 compute-0 nova_compute[185650]:     self._shutdown_instance(context, instance,
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 3109, in _shutdown_instance
Jan 27 23:07:22 compute-0 nova_compute[185650]:     network_info = self.network_api.get_instance_nw_info(
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 1990, in get_instance_nw_info
Jan 27 23:07:22 compute-0 nova_compute[185650]:     result = self._get_instance_nw_info(context, instance, **kwargs)
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 2015, in _get_instance_nw_info
Jan 27 23:07:22 compute-0 nova_compute[185650]:     compute_utils.refresh_info_cache_for_instance(context, instance)
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/compute/utils.py", line 1008, in refresh_info_cache_for_instance
Jan 27 23:07:22 compute-0 nova_compute[185650]:     instance.info_cache.refresh()
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Jan 27 23:07:22 compute-0 nova_compute[185650]:     updates, result = self.indirection_api.object_action(
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Jan 27 23:07:22 compute-0 nova_compute[185650]:     return cctxt.call(context, 'object_action', objinst=objinst,
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 27 23:07:22 compute-0 nova_compute[185650]:     result = self.transport._send(
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 27 23:07:22 compute-0 nova_compute[185650]:     return self._driver.send(target, ctxt, message,
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 27 23:07:22 compute-0 nova_compute[185650]:     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 27 23:07:22 compute-0 nova_compute[185650]:     raise result
Jan 27 23:07:22 compute-0 nova_compute[185650]: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 23:07:22 compute-0 nova_compute[185650]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 23:07:22 compute-0 nova_compute[185650]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance_info_cache.py", line 107, in refresh\n    current = self.__class__.get_by_instance_uuid(self._context,\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in 
wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance_info_cache.py", line 67, in get_by_instance_uuid\n    db_obj = db.instance_info_cache_get(context, instance_uuid)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/utils.py", line 35, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 2473, in instance_info_cache_get\n    return model_query(context, models.InstanceInfoCache).\\\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    
Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 27 23:07:22 compute-0 nova_compute[185650]: 
Jan 27 23:07:22 compute-0 nova_compute[185650]: During handling of the above exception, another exception occurred:
Jan 27 23:07:22 compute-0 nova_compute[185650]: 
Jan 27 23:07:22 compute-0 nova_compute[185650]: Traceback (most recent call last):
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2616, in _build_and_run_instance
Jan 27 23:07:22 compute-0 nova_compute[185650]:     LOG.info('Took %0.2f seconds to spawn the instance on '
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Jan 27 23:07:22 compute-0 nova_compute[185650]:     self.gen.throw(typ, value, traceback)
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2889, in _build_resources
Jan 27 23:07:22 compute-0 nova_compute[185650]:     raise exception.BuildAbortException(
Jan 27 23:07:22 compute-0 nova_compute[185650]: nova.exception.BuildAbortException: Build of instance dd37badf-e0f2-4ba3-b12e-f4238236f28d aborted: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 27 23:07:22 compute-0 nova_compute[185650]: The Keystone service is temporarily unavailable.
Jan 27 23:07:22 compute-0 nova_compute[185650]: 
Jan 27 23:07:22 compute-0 nova_compute[185650]: 
Jan 27 23:07:22 compute-0 nova_compute[185650]: Neutron server returns request_ids: ['req-7349e9a7-ce70-4c34-beea-fef4691f1e14']
Jan 27 23:07:22 compute-0 nova_compute[185650]: 
Jan 27 23:07:22 compute-0 nova_compute[185650]: During handling of the above exception, another exception occurred:
Jan 27 23:07:22 compute-0 nova_compute[185650]: 
Jan 27 23:07:22 compute-0 nova_compute[185650]: Traceback (most recent call last):
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2616, in _build_and_run_instance
Jan 27 23:07:22 compute-0 nova_compute[185650]:     LOG.info('Took %0.2f seconds to spawn the instance on '
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/compute/claims.py", line 43, in __exit__
Jan 27 23:07:22 compute-0 nova_compute[185650]:     self.abort()
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/compute/claims.py", line 86, in abort
Jan 27 23:07:22 compute-0 nova_compute[185650]:     self.tracker.abort_instance_claim(self.context, self.instance_ref,
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py", line 414, in inner
Jan 27 23:07:22 compute-0 nova_compute[185650]:     return f(*args, **kwargs)
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py", line 548, in abort_instance_claim
Jan 27 23:07:22 compute-0 nova_compute[185650]:     self._unset_instance_host_and_node(instance)
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py", line 539, in _unset_instance_host_and_node
Jan 27 23:07:22 compute-0 nova_compute[185650]:     instance.save()
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Jan 27 23:07:22 compute-0 nova_compute[185650]:     updates, result = self.indirection_api.object_action(
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Jan 27 23:07:22 compute-0 nova_compute[185650]:     return cctxt.call(context, 'object_action', objinst=objinst,
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 27 23:07:22 compute-0 nova_compute[185650]:     result = self.transport._send(
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 27 23:07:22 compute-0 nova_compute[185650]:     return self._driver.send(target, ctxt, message,
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 27 23:07:22 compute-0 nova_compute[185650]:     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 27 23:07:22 compute-0 nova_compute[185650]:     raise result
Jan 27 23:07:22 compute-0 nova_compute[185650]: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 23:07:22 compute-0 nova_compute[185650]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 23:07:22 compute-0 nova_compute[185650]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 809, in save\n    db.instance_extra_update_by_uuid(context, self.uuid,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, 
*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 2547, in instance_extra_update_by_uuid\n    rows_updated = model_query(context, models.InstanceExtra).\\\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 3306, in update\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 
'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 27 23:07:22 compute-0 nova_compute[185650]: 
Jan 27 23:07:22 compute-0 nova_compute[185650]: During handling of the above exception, another exception occurred:
Jan 27 23:07:22 compute-0 nova_compute[185650]: 
Jan 27 23:07:22 compute-0 nova_compute[185650]: Traceback (most recent call last):
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2426, in _do_build_and_run_instance
Jan 27 23:07:22 compute-0 nova_compute[185650]:     self._build_and_run_instance(context, instance, image,
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2718, in _build_and_run_instance
Jan 27 23:07:22 compute-0 nova_compute[185650]:     raise exception.RescheduledException(
Jan 27 23:07:22 compute-0 nova_compute[185650]: nova.exception.RescheduledException: Build of instance dd37badf-e0f2-4ba3-b12e-f4238236f28d was re-scheduled: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 23:07:22 compute-0 nova_compute[185650]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 23:07:22 compute-0 nova_compute[185650]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 809, in save\n    db.instance_extra_update_by_uuid(context, self.uuid,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, 
*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 2547, in instance_extra_update_by_uuid\n    rows_updated = model_query(context, models.InstanceExtra).\\\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 3306, in update\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 
'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 27 23:07:22 compute-0 nova_compute[185650]: 
Jan 27 23:07:22 compute-0 nova_compute[185650]: During handling of the above exception, another exception occurred:
Jan 27 23:07:22 compute-0 nova_compute[185650]: 
Jan 27 23:07:22 compute-0 nova_compute[185650]: Traceback (most recent call last):
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 203, in decorated_function
Jan 27 23:07:22 compute-0 nova_compute[185650]:     return function(self, context, *args, **kwargs)
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2461, in _do_build_and_run_instance
Jan 27 23:07:22 compute-0 nova_compute[185650]:     instance.save()
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Jan 27 23:07:22 compute-0 nova_compute[185650]:     updates, result = self.indirection_api.object_action(
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Jan 27 23:07:22 compute-0 nova_compute[185650]:     return cctxt.call(context, 'object_action', objinst=objinst,
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 27 23:07:22 compute-0 nova_compute[185650]:     result = self.transport._send(
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 27 23:07:22 compute-0 nova_compute[185650]:     return self._driver.send(target, ctxt, message,
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 27 23:07:22 compute-0 nova_compute[185650]:     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 27 23:07:22 compute-0 nova_compute[185650]:     raise result
Jan 27 23:07:22 compute-0 nova_compute[185650]: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 23:07:22 compute-0 nova_compute[185650]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 23:07:22 compute-0 nova_compute[185650]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 809, in save\n    db.instance_extra_update_by_uuid(context, self.uuid,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, 
*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 2547, in instance_extra_update_by_uuid\n    rows_updated = model_query(context, models.InstanceExtra).\\\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 3306, in update\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 
'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 27 23:07:22 compute-0 nova_compute[185650]: 
Jan 27 23:07:22 compute-0 nova_compute[185650]: During handling of the above exception, another exception occurred:
Jan 27 23:07:22 compute-0 nova_compute[185650]: 
Jan 27 23:07:22 compute-0 nova_compute[185650]: Traceback (most recent call last):
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/compute/utils.py", line 1439, in decorated_function
Jan 27 23:07:22 compute-0 nova_compute[185650]:     return function(self, context, *args, **kwargs)
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 214, in decorated_function
Jan 27 23:07:22 compute-0 nova_compute[185650]:     compute_utils.add_instance_fault_from_exc(context,
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/compute/utils.py", line 153, in add_instance_fault_from_exc
Jan 27 23:07:22 compute-0 nova_compute[185650]:     fault_obj.create()
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Jan 27 23:07:22 compute-0 nova_compute[185650]:     updates, result = self.indirection_api.object_action(
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Jan 27 23:07:22 compute-0 nova_compute[185650]:     return cctxt.call(context, 'object_action', objinst=objinst,
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 27 23:07:22 compute-0 nova_compute[185650]:     result = self.transport._send(
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 27 23:07:22 compute-0 nova_compute[185650]:     return self._driver.send(target, ctxt, message,
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 27 23:07:22 compute-0 nova_compute[185650]:     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 27 23:07:22 compute-0 nova_compute[185650]:     raise result
Jan 27 23:07:22 compute-0 nova_compute[185650]: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 23:07:22 compute-0 nova_compute[185650]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 23:07:22 compute-0 nova_compute[185650]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance_fault.py", line 76, in create\n    db_fault = db.instance_fault_create(self._context, values)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    
return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 3823, in instance_fault_create\n    fault_ref.save(context.session)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/models.py", line 38, in save\n    session.flush()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 3444, in flush\n    self._flush(objects)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 3584, in _flush\n    transaction.rollback(_capture_exception=True)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 3544, in _flush\n    flush_context.execute()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute\n    rec.execute(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/unitofwork.py", line 630, in execute\n    util.preloaded.orm_persistence.save_obj(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/persistence.py", line 212, in save_obj\n    for (\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/persistence.py", line 373, in _organize_states_for_save\n    for state, dict_, mapper, connection in _connections_for_states(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/persistence.py", line 1737, in _connections_for_states\n    connection = uowtransaction.transaction.connection(base_mapper)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 626, in connection\n    return self._connection_for_bind(bind, execution_options)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 735, in _connection_for_bind\n    conn = 
self._parent._connection_for_bind(bind, execution_options)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 27 23:07:22 compute-0 nova_compute[185650]: 
Jan 27 23:07:22 compute-0 nova_compute[185650]: During handling of the above exception, another exception occurred:
Jan 27 23:07:22 compute-0 nova_compute[185650]: 
Jan 27 23:07:22 compute-0 nova_compute[185650]: Traceback (most recent call last):
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2317, in _locked_do_build_and_run_instance
Jan 27 23:07:22 compute-0 nova_compute[185650]:     result = self._do_build_and_run_instance(*args, **kwargs)
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Jan 27 23:07:22 compute-0 nova_compute[185650]:     _emit_versioned_exception_notification(
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Jan 27 23:07:22 compute-0 nova_compute[185650]:     self.force_reraise()
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Jan 27 23:07:22 compute-0 nova_compute[185650]:     raise self.value
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Jan 27 23:07:22 compute-0 nova_compute[185650]:     return f(self, context, *args, **kw)
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 186, in decorated_function
Jan 27 23:07:22 compute-0 nova_compute[185650]:     LOG.warning("Failed to revert task state for instance. "
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Jan 27 23:07:22 compute-0 nova_compute[185650]:     self.force_reraise()
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Jan 27 23:07:22 compute-0 nova_compute[185650]:     raise self.value
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 157, in decorated_function
Jan 27 23:07:22 compute-0 nova_compute[185650]:     return function(self, context, *args, **kwargs)
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/compute/utils.py", line 1439, in decorated_function
Jan 27 23:07:22 compute-0 nova_compute[185650]:     return function(self, context, *args, **kwargs)
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/compute/utils.py", line 1400, in __exit__
Jan 27 23:07:22 compute-0 nova_compute[185650]:     objects.InstanceActionEvent.event_finish_with_failure(
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/objects/base.py", line 355, in wrapper
Jan 27 23:07:22 compute-0 nova_compute[185650]:     return fn.__get__(None, obj)(*args, **kwargs)
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Jan 27 23:07:22 compute-0 nova_compute[185650]:     result = cls.indirection_api.object_class_action_versions(
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Jan 27 23:07:22 compute-0 nova_compute[185650]:     return cctxt.call(context, 'object_class_action_versions',
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 27 23:07:22 compute-0 nova_compute[185650]:     result = self.transport._send(
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 27 23:07:22 compute-0 nova_compute[185650]:     return self._driver.send(target, ctxt, message,
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 27 23:07:22 compute-0 nova_compute[185650]:     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 27 23:07:22 compute-0 nova_compute[185650]:     raise result
Jan 27 23:07:22 compute-0 nova_compute[185650]: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 23:07:22 compute-0 nova_compute[185650]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 23:07:22 compute-0 nova_compute[185650]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/base.py", line 355, in wrapper\n    return fn.__get__(None, obj)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance_action.py", line 227, in 
event_finish_with_failure\n    db_event = db.action_event_finish(context, values)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 4020, in action_event_finish\n    action = _action_get_by_request_id(context, values[\'instance_uuid\'],\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 3956, in _action_get_by_request_id\n    result = model_query(context, models.InstanceAction).\\\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in 
__init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in 
__exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 27 23:07:22 compute-0 nova_compute[185650]: 
Jan 27 23:07:22 compute-0 nova_compute[185650]: During handling of the above exception, another exception occurred:
Jan 27 23:07:22 compute-0 nova_compute[185650]: 
Jan 27 23:07:22 compute-0 nova_compute[185650]: Traceback (most recent call last):
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/eventlet/hubs/poll.py", line 111, in wait
Jan 27 23:07:22 compute-0 nova_compute[185650]:     listener.cb(fileno)
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/utils.py", line 654, in context_wrapper
Jan 27 23:07:22 compute-0 nova_compute[185650]:     return func(*args, **kwargs)
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py", line 414, in inner
Jan 27 23:07:22 compute-0 nova_compute[185650]:     return f(*args, **kwargs)
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2343, in _locked_do_build_and_run_instance
Jan 27 23:07:22 compute-0 nova_compute[185650]:     self.reportclient.delete_allocation_for_instance(
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py", line 120, in wrapper
Jan 27 23:07:22 compute-0 nova_compute[185650]:     return f(self, *a, **k)
Jan 27 23:07:22 compute-0 nova_compute[185650]:   File "/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py", line 2154, in delete_allocation_for_instance
Jan 27 23:07:22 compute-0 nova_compute[185650]:     raise exception.AllocationDeleteFailed(consumer_uuid=uuid,
Jan 27 23:07:22 compute-0 nova_compute[185650]: nova.exception.AllocationDeleteFailed: Failed to delete allocations for consumer dd37badf-e0f2-4ba3-b12e-f4238236f28d. Error: {"errors": [{"status": 500, "title": "Internal Server Error", "detail": "The server has either erred or is incapable of performing the requested operation.\n\n (pymysql.err.OperationalError) (2003, \"Can't connect to MySQL server on 'openstack.openstack.svc' ([Errno 111] Connection refused)\") [SQL: SELECT 1] (Background on this error at: https://sqlalche.me/e/14/e3q8)  ", "request_id": "req-97707d8a-4beb-40b6-89cf-fbc1e75b0b24"}]}
Jan 27 23:07:22 compute-0 nova_compute[185650]: Removing descriptor: 31
Jan 27 23:07:22 compute-0 rsyslogd[235951]: message too long (8744) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 23:07:22 compute-0 rsyslogd[235951]: message too long (8152) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 23:07:22 compute-0 rsyslogd[235951]: message too long (8152) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 23:07:22 compute-0 rsyslogd[235951]: message too long (8152) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 23:07:22 compute-0 rsyslogd[235951]: message too long (9544) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 23:07:22 compute-0 rsyslogd[235951]: message too long (9102) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 23:07:25 compute-0 nova_compute[185650]: 2026-01-27 23:07:25.301 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:25 compute-0 nova_compute[185650]: 2026-01-27 23:07:25.626 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:28 compute-0 podman[249928]: 2026-01-27 23:07:28.398039025 +0000 UTC m=+0.086604432 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 27 23:07:28 compute-0 podman[249929]: 2026-01-27 23:07:28.439666496 +0000 UTC m=+0.122665945 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Jan 27 23:07:28 compute-0 nova_compute[185650]: 2026-01-27 23:07:28.595 185654 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 23:07:28 compute-0 nova_compute[185650]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 23:07:28 compute-0 nova_compute[185650]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = 
e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in 
raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise 
exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 27 23:07:28 compute-0 nova_compute[185650]: 2026-01-27 23:07:28.595 185654 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Jan 27 23:07:28 compute-0 nova_compute[185650]: 2026-01-27 23:07:28.595 185654 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Jan 27 23:07:28 compute-0 nova_compute[185650]: 2026-01-27 23:07:28.595 185654 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Jan 27 23:07:28 compute-0 nova_compute[185650]: 2026-01-27 23:07:28.595 185654 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Jan 27 23:07:28 compute-0 nova_compute[185650]: 2026-01-27 23:07:28.595 185654 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Jan 27 23:07:28 compute-0 nova_compute[185650]: 2026-01-27 23:07:28.595 185654 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Jan 27 23:07:28 compute-0 nova_compute[185650]: 2026-01-27 23:07:28.595 185654 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Jan 27 23:07:28 compute-0 nova_compute[185650]: 2026-01-27 23:07:28.595 185654 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 27 23:07:28 compute-0 nova_compute[185650]: 2026-01-27 23:07:28.595 185654 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Jan 27 23:07:28 compute-0 nova_compute[185650]: 2026-01-27 23:07:28.595 185654 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 27 23:07:28 compute-0 nova_compute[185650]: 2026-01-27 23:07:28.595 185654 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Jan 27 23:07:28 compute-0 nova_compute[185650]: 2026-01-27 23:07:28.595 185654 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 27 23:07:28 compute-0 nova_compute[185650]: 2026-01-27 23:07:28.595 185654 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 27 23:07:28 compute-0 nova_compute[185650]: 2026-01-27 23:07:28.595 185654 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 27 23:07:28 compute-0 nova_compute[185650]: 2026-01-27 23:07:28.595 185654 ERROR nova.servicegroup.drivers.db     raise result
Jan 27 23:07:28 compute-0 nova_compute[185650]: 2026-01-27 23:07:28.595 185654 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 27 23:07:28 compute-0 nova_compute[185650]: 2026-01-27 23:07:28.595 185654 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 27 23:07:28 compute-0 nova_compute[185650]: 2026-01-27 23:07:28.595 185654 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in 
get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File 
"/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else 
engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    
compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 27 23:07:28 compute-0 nova_compute[185650]: 2026-01-27 23:07:28.595 185654 ERROR nova.servicegroup.drivers.db 
Jan 27 23:07:28 compute-0 rsyslogd[235951]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 23:07:28 compute-0 rsyslogd[235951]: message too long (9052) with configured size 8096, begin of message is: 2026-01-27 23:07:28.595 185654 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 27 23:07:29 compute-0 podman[201529]: time="2026-01-27T23:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 23:07:29 compute-0 podman[201529]: @ - - [27/Jan/2026:23:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30974 "" "Go-http-client/1.1"
Jan 27 23:07:29 compute-0 podman[201529]: @ - - [27/Jan/2026:23:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5311 "" "Go-http-client/1.1"
Jan 27 23:07:30 compute-0 nova_compute[185650]: 2026-01-27 23:07:30.305 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:30 compute-0 nova_compute[185650]: 2026-01-27 23:07:30.628 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:31 compute-0 openstack_network_exporter[204648]: ERROR   23:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 23:07:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:07:31 compute-0 openstack_network_exporter[204648]: ERROR   23:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 23:07:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:07:33 compute-0 podman[249979]: 2026-01-27 23:07:33.378439904 +0000 UTC m=+0.073012239 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 23:07:35 compute-0 nova_compute[185650]: 2026-01-27 23:07:35.312 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:35 compute-0 nova_compute[185650]: 2026-01-27 23:07:35.632 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:38.121 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 27 23:07:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:38.125 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 27 23:07:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:38.125 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c646060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826efc78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:07:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:38.126 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f826c6475f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:07:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:38.126 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826efc78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:07:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:38.128 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826efc78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:07:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:38.128 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826efc78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:07:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:38.128 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826efc78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:07:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:38.128 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826efc78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:07:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:38.128 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826efc78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:07:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:38.128 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826efc78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:07:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:38.128 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826efc78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:07:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:38.128 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826efc78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:07:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:38.128 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826efc78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:07:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:38.128 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826efc78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:07:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:38.129 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826efc78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:07:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:38.129 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645490>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826efc78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:07:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:38.130 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826efc78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:07:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:38.130 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826efc78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:07:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:38.130 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826efc78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:07:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:38.130 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826efc78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:07:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:38.131 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826efc78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:07:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:38.131 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826efc78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:07:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:38.131 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826efc78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:07:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:38.131 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826efc78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:07:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:38.131 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826efc78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:07:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:38.131 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c645730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826efc78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:07:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:38.131 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826efc78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:07:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:38.131 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f826c6477a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f826efc78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 27 23:07:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:38.134 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 66eb7f87-9511-4da7-8733-ef0673cfab67 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 23:07:38 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:38.136 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/66eb7f87-9511-4da7-8733-ef0673cfab67 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}154da27a0715c4500fb4356c9136f029f6352e657551e62d11427d8299e729cc" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 23:07:38 compute-0 podman[250004]: 2026-01-27 23:07:38.400946937 +0000 UTC m=+0.101533701 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.expose-services=, name=ubi9, release=1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, vendor=Red Hat, Inc., managed_by=edpm_ansible, release-0.7.12=, io.openshift.tags=base rhel9, config_id=kepler, version=9.4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Jan 27 23:07:38 compute-0 podman[250006]: 2026-01-27 23:07:38.403344791 +0000 UTC m=+0.096411764 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 27 23:07:38 compute-0 podman[250005]: 2026-01-27 23:07:38.452130433 +0000 UTC m=+0.140543061 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 27 23:07:38 compute-0 nova_compute[185650]: 2026-01-27 23:07:38.579 185654 INFO nova.servicegroup.drivers.db [-] Recovered from being unable to report status.
Jan 27 23:07:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:40.117 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1980 Content-Type: application/json Date: Tue, 27 Jan 2026 23:07:38 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-41c004d7-b8bc-4b20-881b-8a902bd0de36 x-openstack-request-id: req-41c004d7-b8bc-4b20-881b-8a902bd0de36 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 23:07:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:40.118 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "66eb7f87-9511-4da7-8733-ef0673cfab67", "name": "tempest-ServerActionsTestJSON-server-817207074", "status": "ACTIVE", "tenant_id": "270690dca2514a49843b866111c87d39", "user_id": "4ed42d6c691545f987cae97bc62b185c", "metadata": {}, "hostId": "32cc34f89bc86920d0414a349db056c81d2b22ba42b190621aa69d20", "image": {"id": "319632d9-1bdd-4de0-b1d2-0507a3e91b6b", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/319632d9-1bdd-4de0-b1d2-0507a3e91b6b"}]}, "flavor": {"id": "d732a0b9-79cd-4ff7-8741-11ae188a8b69", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/d732a0b9-79cd-4ff7-8741-11ae188a8b69"}]}, "created": "2026-01-27T23:05:36Z", "updated": "2026-01-27T23:05:52Z", "addresses": {"tempest-ServerActionsTestJSON-1504245290-network": [{"version": 4, "addr": "10.100.0.8", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:23:60:c6"}, {"version": 4, "addr": "192.168.122.193", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:23:60:c6"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/66eb7f87-9511-4da7-8733-ef0673cfab67"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/66eb7f87-9511-4da7-8733-ef0673cfab67"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-1751861480", "OS-SRV-USG:launched_at": "2026-01-27T23:05:52.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--1555115966"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000006", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": 
null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 23:07:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:40.118 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/66eb7f87-9511-4da7-8733-ef0673cfab67 used request id req-41c004d7-b8bc-4b20-881b-8a902bd0de36 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 23:07:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:40.119 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '66eb7f87-9511-4da7-8733-ef0673cfab67', 'name': 'tempest-ServerActionsTestJSON-server-817207074', 'flavor': {'id': 'd732a0b9-79cd-4ff7-8741-11ae188a8b69', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '319632d9-1bdd-4de0-b1d2-0507a3e91b6b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000006', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '270690dca2514a49843b866111c87d39', 'user_id': '4ed42d6c691545f987cae97bc62b185c', 'hostId': '32cc34f89bc86920d0414a349db056c81d2b22ba42b190621aa69d20', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 23:07:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:40.123 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 6e4e7f3d-60d3-49cf-b7be-e93194c45a44 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 23:07:40 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:40.124 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/6e4e7f3d-60d3-49cf-b7be-e93194c45a44 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}154da27a0715c4500fb4356c9136f029f6352e657551e62d11427d8299e729cc" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 23:07:40 compute-0 nova_compute[185650]: 2026-01-27 23:07:40.323 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:40 compute-0 nova_compute[185650]: 2026-01-27 23:07:40.636 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:41 compute-0 ovn_controller[98048]: 2026-01-27T23:07:41Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fc:26:ef 10.100.0.7
Jan 27 23:07:41 compute-0 ovn_controller[98048]: 2026-01-27T23:07:41Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fc:26:ef 10.100.0.7
Jan 27 23:07:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:41.558 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1897 Content-Type: application/json Date: Tue, 27 Jan 2026 23:07:40 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-3a742ffe-7ee2-4123-a80e-b83ceba2e735 x-openstack-request-id: req-3a742ffe-7ee2-4123-a80e-b83ceba2e735 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 23:07:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:41.558 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "6e4e7f3d-60d3-49cf-b7be-e93194c45a44", "name": "tempest-ServersTestManualDisk-server-1823238454", "status": "ACTIVE", "tenant_id": "96e79f52da2341129f0c6e2459dae69d", "user_id": "ea2353d747c04d31940685f5b6330baa", "metadata": {"hello": "world"}, "hostId": "6d9bec8ecba9e301a8068afe29c327f71e76909ea995c03afb5c8120", "image": {"id": "319632d9-1bdd-4de0-b1d2-0507a3e91b6b", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/319632d9-1bdd-4de0-b1d2-0507a3e91b6b"}]}, "flavor": {"id": "d732a0b9-79cd-4ff7-8741-11ae188a8b69", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/d732a0b9-79cd-4ff7-8741-11ae188a8b69"}]}, "created": "2026-01-27T23:06:44Z", "updated": "2026-01-27T23:07:05Z", "addresses": {"tempest-ServersTestManualDisk-1392185424-network": [{"version": 4, "addr": "10.100.0.7", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fc:26:ef"}]}, "accessIPv4": "1.1.1.1", "accessIPv6": "::babe:dc0c:1602", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/6e4e7f3d-60d3-49cf-b7be-e93194c45a44"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/6e4e7f3d-60d3-49cf-b7be-e93194c45a44"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-675614874", "OS-SRV-USG:launched_at": "2026-01-27T23:07:05.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--1527261139"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000a", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, 
"os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 23:07:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:41.559 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/6e4e7f3d-60d3-49cf-b7be-e93194c45a44 used request id req-3a742ffe-7ee2-4123-a80e-b83ceba2e735 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 23:07:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:41.565 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '6e4e7f3d-60d3-49cf-b7be-e93194c45a44', 'name': 'tempest-ServersTestManualDisk-server-1823238454', 'flavor': {'id': 'd732a0b9-79cd-4ff7-8741-11ae188a8b69', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '319632d9-1bdd-4de0-b1d2-0507a3e91b6b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000a', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '96e79f52da2341129f0c6e2459dae69d', 'user_id': 'ea2353d747c04d31940685f5b6330baa', 'hostId': '6d9bec8ecba9e301a8068afe29c327f71e76909ea995c03afb5c8120', 'status': 'active', 'metadata': {'hello': 'world'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 23:07:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:41.568 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 9033d5a6-ab60-43e3-bbcb-3a8b83161c58 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 27 23:07:41 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:41.569 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/9033d5a6-ab60-43e3-bbcb-3a8b83161c58 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}154da27a0715c4500fb4356c9136f029f6352e657551e62d11427d8299e729cc" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.319 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1996 Content-Type: application/json Date: Tue, 27 Jan 2026 23:07:41 GMT Keep-Alive: timeout=5, max=98 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-257edb77-cedc-4bdc-8410-c55bd44330c2 x-openstack-request-id: req-257edb77-cedc-4bdc-8410-c55bd44330c2 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.320 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "9033d5a6-ab60-43e3-bbcb-3a8b83161c58", "name": "tempest-AttachInterfacesUnderV243Test-server-1437890012", "status": "ACTIVE", "tenant_id": "74f54dfa359341ba8894a95865378d18", "user_id": "39e9f4625e8b494b9682d5622bf1b206", "metadata": {}, "hostId": "b02d0269dfd72a1698ba789fca3d27d035508ffcba7cdea7877418a4", "image": {"id": "319632d9-1bdd-4de0-b1d2-0507a3e91b6b", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/319632d9-1bdd-4de0-b1d2-0507a3e91b6b"}]}, "flavor": {"id": "d732a0b9-79cd-4ff7-8741-11ae188a8b69", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/d732a0b9-79cd-4ff7-8741-11ae188a8b69"}]}, "created": "2026-01-27T23:05:38Z", "updated": "2026-01-27T23:05:55Z", "addresses": {"tempest-AttachInterfacesUnderV243Test-161936656-network": [{"version": 4, "addr": "10.100.0.11", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:81:28:a4"}, {"version": 4, "addr": "192.168.122.185", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:81:28:a4"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/9033d5a6-ab60-43e3-bbcb-3a8b83161c58"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/9033d5a6-ab60-43e3-bbcb-3a8b83161c58"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-1401102305", "OS-SRV-USG:launched_at": "2026-01-27T23:05:55.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--585722776"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000007", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", 
"OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.320 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/9033d5a6-ab60-43e3-bbcb-3a8b83161c58 used request id req-257edb77-cedc-4bdc-8410-c55bd44330c2 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.322 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '9033d5a6-ab60-43e3-bbcb-3a8b83161c58', 'name': 'tempest-AttachInterfacesUnderV243Test-server-1437890012', 'flavor': {'id': 'd732a0b9-79cd-4ff7-8741-11ae188a8b69', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '319632d9-1bdd-4de0-b1d2-0507a3e91b6b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000007', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '74f54dfa359341ba8894a95865378d18', 'user_id': '39e9f4625e8b494b9682d5622bf1b206', 'hostId': 'b02d0269dfd72a1698ba789fca3d27d035508ffcba7cdea7877418a4', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.322 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.322 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c646060>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.323 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c646060>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.323 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.324 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-27T23:07:43.323302) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.330 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 66eb7f87-9511-4da7-8733-ef0673cfab67 / tap64b86a6b-6d inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.330 14 DEBUG ceilometer.compute.pollsters [-] 66eb7f87-9511-4da7-8733-ef0673cfab67/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.334 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 6e4e7f3d-60d3-49cf-b7be-e93194c45a44 / tap2621603a-64 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.335 14 DEBUG ceilometer.compute.pollsters [-] 6e4e7f3d-60d3-49cf-b7be-e93194c45a44/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.341 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 9033d5a6-ab60-43e3-bbcb-3a8b83161c58 / tap5c31fe8e-f9 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.342 14 DEBUG ceilometer.compute.pollsters [-] 9033d5a6-ab60-43e3-bbcb-3a8b83161c58/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.343 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f826c645dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.344 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.344 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.344 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.345 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.345 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.345 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-27T23:07:43.344981) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.346 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-817207074>, <NovaLikeServer: tempest-ServersTestManualDisk-server-1823238454>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1437890012>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-817207074>, <NovaLikeServer: tempest-ServersTestManualDisk-server-1823238454>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1437890012>]
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f826c647800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.346 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.347 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.347 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6440b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.347 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.348 14 DEBUG ceilometer.compute.pollsters [-] 66eb7f87-9511-4da7-8733-ef0673cfab67/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.348 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-27T23:07:43.347564) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.348 14 DEBUG ceilometer.compute.pollsters [-] 6e4e7f3d-60d3-49cf-b7be-e93194c45a44/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.348 14 DEBUG ceilometer.compute.pollsters [-] 9033d5a6-ab60-43e3-bbcb-3a8b83161c58/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.349 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.349 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f826c647650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.350 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.350 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.350 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6459a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.350 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.351 14 DEBUG ceilometer.compute.pollsters [-] 66eb7f87-9511-4da7-8733-ef0673cfab67/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.351 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-27T23:07:43.350930) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.351 14 DEBUG ceilometer.compute.pollsters [-] 6e4e7f3d-60d3-49cf-b7be-e93194c45a44/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.352 14 DEBUG ceilometer.compute.pollsters [-] 9033d5a6-ab60-43e3-bbcb-3a8b83161c58/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.352 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f826c645640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.353 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.353 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.353 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645a60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.354 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.354 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-27T23:07:43.354124) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.421 14 DEBUG ceilometer.compute.pollsters [-] 66eb7f87-9511-4da7-8733-ef0673cfab67/disk.device.write.latency volume: 7707786171 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.421 14 DEBUG ceilometer.compute.pollsters [-] 66eb7f87-9511-4da7-8733-ef0673cfab67/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.466 14 DEBUG ceilometer.compute.pollsters [-] 6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk.device.write.latency volume: 3248004144 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.467 14 DEBUG ceilometer.compute.pollsters [-] 6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.517 14 DEBUG ceilometer.compute.pollsters [-] 9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk.device.write.latency volume: 5709434938 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.518 14 DEBUG ceilometer.compute.pollsters [-] 9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.518 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f826c8ae7e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.519 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.519 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.519 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826e38aab0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.520 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.520 14 DEBUG ceilometer.compute.pollsters [-] 66eb7f87-9511-4da7-8733-ef0673cfab67/network.incoming.bytes volume: 4475 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.520 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-27T23:07:43.520212) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.521 14 DEBUG ceilometer.compute.pollsters [-] 6e4e7f3d-60d3-49cf-b7be-e93194c45a44/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.522 14 DEBUG ceilometer.compute.pollsters [-] 9033d5a6-ab60-43e3-bbcb-3a8b83161c58/network.incoming.bytes volume: 4343 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.522 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.523 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f826c645a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.523 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.523 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.524 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.524 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.524 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-27T23:07:43.524507) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.524 14 DEBUG ceilometer.compute.pollsters [-] 66eb7f87-9511-4da7-8733-ef0673cfab67/disk.device.write.requests volume: 301 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.525 14 DEBUG ceilometer.compute.pollsters [-] 66eb7f87-9511-4da7-8733-ef0673cfab67/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.525 14 DEBUG ceilometer.compute.pollsters [-] 6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk.device.write.requests volume: 307 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.526 14 DEBUG ceilometer.compute.pollsters [-] 6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.526 14 DEBUG ceilometer.compute.pollsters [-] 9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk.device.write.requests volume: 296 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.526 14 DEBUG ceilometer.compute.pollsters [-] 9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.527 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f826c6462a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.527 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.528 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.528 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6462d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.528 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.528 14 DEBUG ceilometer.compute.pollsters [-] 66eb7f87-9511-4da7-8733-ef0673cfab67/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.528 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-27T23:07:43.528544) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.529 14 DEBUG ceilometer.compute.pollsters [-] 6e4e7f3d-60d3-49cf-b7be-e93194c45a44/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.529 14 DEBUG ceilometer.compute.pollsters [-] 9033d5a6-ab60-43e3-bbcb-3a8b83161c58/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.529 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f826c647f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.530 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.530 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.530 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c8c52e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.531 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.531 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-27T23:07:43.531098) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.565 14 DEBUG ceilometer.compute.pollsters [-] 66eb7f87-9511-4da7-8733-ef0673cfab67/cpu volume: 35530000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 nova_compute[185650]: 2026-01-27 23:07:43.584 185654 DEBUG nova.network.neutron [None req-d961fe3b-8891-44b9-b149-ad73d5181319 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.594 14 DEBUG ceilometer.compute.pollsters [-] 6e4e7f3d-60d3-49cf-b7be-e93194c45a44/cpu volume: 35340000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.622 14 DEBUG ceilometer.compute.pollsters [-] 9033d5a6-ab60-43e3-bbcb-3a8b83161c58/cpu volume: 36720000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.622 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.622 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f826c645af0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.623 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.623 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.623 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.623 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.623 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.624 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f826c645d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.624 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.624 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.624 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826ee82330>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.624 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-27T23:07:43.623219) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.624 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.624 14 DEBUG ceilometer.compute.pollsters [-] 66eb7f87-9511-4da7-8733-ef0673cfab67/memory.usage volume: 42.7734375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.625 14 DEBUG ceilometer.compute.pollsters [-] 6e4e7f3d-60d3-49cf-b7be-e93194c45a44/memory.usage volume: 40.4765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.625 14 DEBUG ceilometer.compute.pollsters [-] 9033d5a6-ab60-43e3-bbcb-3a8b83161c58/memory.usage volume: 42.23828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.625 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.625 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f826c645b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.626 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.626 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.626 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-27T23:07:43.624617) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.626 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.626 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.627 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f826c644a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.627 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.627 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.627 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.627 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.628 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-27T23:07:43.626538) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.628 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-27T23:07:43.627677) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.650 14 DEBUG ceilometer.compute.pollsters [-] 66eb7f87-9511-4da7-8733-ef0673cfab67/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.651 14 DEBUG ceilometer.compute.pollsters [-] 66eb7f87-9511-4da7-8733-ef0673cfab67/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.669 14 DEBUG ceilometer.compute.pollsters [-] 6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.669 14 DEBUG ceilometer.compute.pollsters [-] 6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.685 14 DEBUG ceilometer.compute.pollsters [-] 9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.686 14 DEBUG ceilometer.compute.pollsters [-] 9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.687 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.687 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f826c6453a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.687 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.687 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645490>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.688 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645490>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.688 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.688 14 DEBUG ceilometer.compute.pollsters [-] 66eb7f87-9511-4da7-8733-ef0673cfab67/disk.device.read.bytes volume: 31009280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.688 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-27T23:07:43.688263) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.689 14 DEBUG ceilometer.compute.pollsters [-] 66eb7f87-9511-4da7-8733-ef0673cfab67/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.689 14 DEBUG ceilometer.compute.pollsters [-] 6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk.device.read.bytes volume: 30755328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.689 14 DEBUG ceilometer.compute.pollsters [-] 6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.690 14 DEBUG ceilometer.compute.pollsters [-] 9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk.device.read.bytes volume: 31025664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.690 14 DEBUG ceilometer.compute.pollsters [-] 9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.691 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.691 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f826c6454c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.691 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.691 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.692 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6454f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.692 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.692 14 DEBUG ceilometer.compute.pollsters [-] 66eb7f87-9511-4da7-8733-ef0673cfab67/disk.device.read.latency volume: 1184821092 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.692 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-27T23:07:43.692320) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.692 14 DEBUG ceilometer.compute.pollsters [-] 66eb7f87-9511-4da7-8733-ef0673cfab67/disk.device.read.latency volume: 102310343 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.693 14 DEBUG ceilometer.compute.pollsters [-] 6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk.device.read.latency volume: 1096374683 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.693 14 DEBUG ceilometer.compute.pollsters [-] 6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk.device.read.latency volume: 78383295 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.693 14 DEBUG ceilometer.compute.pollsters [-] 9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk.device.read.latency volume: 2367171311 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.694 14 DEBUG ceilometer.compute.pollsters [-] 9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk.device.read.latency volume: 96324834 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.694 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.695 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f826c645520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.695 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.695 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645550>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.695 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645550>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.696 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.696 14 DEBUG ceilometer.compute.pollsters [-] 66eb7f87-9511-4da7-8733-ef0673cfab67/disk.device.read.requests volume: 1133 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.696 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-27T23:07:43.695993) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.696 14 DEBUG ceilometer.compute.pollsters [-] 66eb7f87-9511-4da7-8733-ef0673cfab67/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.696 14 DEBUG ceilometer.compute.pollsters [-] 6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk.device.read.requests volume: 1110 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.697 14 DEBUG ceilometer.compute.pollsters [-] 6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.697 14 DEBUG ceilometer.compute.pollsters [-] 9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk.device.read.requests volume: 1137 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.698 14 DEBUG ceilometer.compute.pollsters [-] 9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.698 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.698 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f826c645d90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.699 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.699 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.699 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645d60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.699 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.700 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-27T23:07:43.699683) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.700 14 DEBUG ceilometer.compute.pollsters [-] 66eb7f87-9511-4da7-8733-ef0673cfab67/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.700 14 DEBUG ceilometer.compute.pollsters [-] 6e4e7f3d-60d3-49cf-b7be-e93194c45a44/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.700 14 DEBUG ceilometer.compute.pollsters [-] 9033d5a6-ab60-43e3-bbcb-3a8b83161c58/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.701 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.701 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f826c646570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.701 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.702 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.702 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6465a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.702 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.702 14 DEBUG ceilometer.compute.pollsters [-] 66eb7f87-9511-4da7-8733-ef0673cfab67/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.702 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-27T23:07:43.702436) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.703 14 DEBUG ceilometer.compute.pollsters [-] 6e4e7f3d-60d3-49cf-b7be-e93194c45a44/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.703 14 DEBUG ceilometer.compute.pollsters [-] 9033d5a6-ab60-43e3-bbcb-3a8b83161c58/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.703 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f826c645580>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.704 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.704 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.704 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6455b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.705 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.705 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-27T23:07:43.705020) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.705 14 DEBUG ceilometer.compute.pollsters [-] 66eb7f87-9511-4da7-8733-ef0673cfab67/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.705 14 DEBUG ceilometer.compute.pollsters [-] 66eb7f87-9511-4da7-8733-ef0673cfab67/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.705 14 DEBUG ceilometer.compute.pollsters [-] 6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk.device.usage volume: 29753344 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.706 14 DEBUG ceilometer.compute.pollsters [-] 6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.706 14 DEBUG ceilometer.compute.pollsters [-] 9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.707 14 DEBUG ceilometer.compute.pollsters [-] 9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.707 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.707 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f826c6455e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.708 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.708 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645610>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.708 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645610>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.708 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.708 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-27T23:07:43.708599) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.708 14 DEBUG ceilometer.compute.pollsters [-] 66eb7f87-9511-4da7-8733-ef0673cfab67/disk.device.write.bytes volume: 73097216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.709 14 DEBUG ceilometer.compute.pollsters [-] 66eb7f87-9511-4da7-8733-ef0673cfab67/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.709 14 DEBUG ceilometer.compute.pollsters [-] 6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk.device.write.bytes volume: 72777728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.709 14 DEBUG ceilometer.compute.pollsters [-] 6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.710 14 DEBUG ceilometer.compute.pollsters [-] 9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk.device.write.bytes volume: 73117696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.710 14 DEBUG ceilometer.compute.pollsters [-] 9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.710 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f826c644050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.711 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.711 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.711 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.712 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.712 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-27T23:07:43.712134) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.712 14 DEBUG ceilometer.compute.pollsters [-] 66eb7f87-9511-4da7-8733-ef0673cfab67/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.712 14 DEBUG ceilometer.compute.pollsters [-] 6e4e7f3d-60d3-49cf-b7be-e93194c45a44/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.713 14 DEBUG ceilometer.compute.pollsters [-] 9033d5a6-ab60-43e3-bbcb-3a8b83161c58/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.713 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f826c647860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.715 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.715 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.715 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.716 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.716 14 DEBUG ceilometer.compute.pollsters [-] 66eb7f87-9511-4da7-8733-ef0673cfab67/network.outgoing.bytes volume: 3456 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.716 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-27T23:07:43.716157) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.716 14 DEBUG ceilometer.compute.pollsters [-] 6e4e7f3d-60d3-49cf-b7be-e93194c45a44/network.outgoing.bytes volume: 1284 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.717 14 DEBUG ceilometer.compute.pollsters [-] 9033d5a6-ab60-43e3-bbcb-3a8b83161c58/network.outgoing.bytes volume: 3390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.717 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f826c6476e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.718 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.718 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.718 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.719 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.719 14 DEBUG ceilometer.compute.pollsters [-] 66eb7f87-9511-4da7-8733-ef0673cfab67/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.719 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-27T23:07:43.719084) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.719 14 DEBUG ceilometer.compute.pollsters [-] 6e4e7f3d-60d3-49cf-b7be-e93194c45a44/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.720 14 DEBUG ceilometer.compute.pollsters [-] 9033d5a6-ab60-43e3-bbcb-3a8b83161c58/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.720 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.720 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f826c6456a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.720 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.721 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c645730>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.721 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c645730>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.721 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.721 14 DEBUG ceilometer.compute.pollsters [-] 66eb7f87-9511-4da7-8733-ef0673cfab67/network.outgoing.packets volume: 29 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.721 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-27T23:07:43.721552) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.722 14 DEBUG ceilometer.compute.pollsters [-] 6e4e7f3d-60d3-49cf-b7be-e93194c45a44/network.outgoing.packets volume: 10 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.722 14 DEBUG ceilometer.compute.pollsters [-] 9033d5a6-ab60-43e3-bbcb-3a8b83161c58/network.outgoing.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.723 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.723 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f826f277b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.723 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.723 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.723 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c647f50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.724 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.724 14 DEBUG ceilometer.compute.pollsters [-] 66eb7f87-9511-4da7-8733-ef0673cfab67/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.724 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-27T23:07:43.724149) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.724 14 DEBUG ceilometer.compute.pollsters [-] 66eb7f87-9511-4da7-8733-ef0673cfab67/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.725 14 DEBUG ceilometer.compute.pollsters [-] 6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk.device.allocation volume: 30679040 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.725 14 DEBUG ceilometer.compute.pollsters [-] 6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.725 14 DEBUG ceilometer.compute.pollsters [-] 9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.726 14 DEBUG ceilometer.compute.pollsters [-] 9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.726 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.726 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f826c647770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f826c8c4b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.727 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.727 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f826c6477a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.727 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f826c6477a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.727 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.727 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-27T23:07:43.727674) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.727 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.728 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-817207074>, <NovaLikeServer: tempest-ServersTestManualDisk-server-1823238454>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1437890012>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-817207074>, <NovaLikeServer: tempest-ServersTestManualDisk-server-1823238454>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1437890012>]
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.728 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.728 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.729 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.729 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.729 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.729 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.730 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.730 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.730 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.730 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.730 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.733 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.733 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.733 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.733 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.733 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:07:43 compute-0 ceilometer_agent_compute[195354]: 2026-01-27 23:07:43.734 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 27 23:07:43 compute-0 nova_compute[185650]: 2026-01-27 23:07:43.763 185654 DEBUG nova.compute.manager [req-c2e5e707-8247-49c2-a4d9-292aca0ceb3f req-48af84f4-7912-4982-ba48-d156a53800d4 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Received event network-changed-5c31fe8e-f952-4e71-b32a-ec4759a7fc07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 27 23:07:43 compute-0 nova_compute[185650]: 2026-01-27 23:07:43.764 185654 DEBUG nova.compute.manager [req-c2e5e707-8247-49c2-a4d9-292aca0ceb3f req-48af84f4-7912-4982-ba48-d156a53800d4 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Refreshing instance network info cache due to event network-changed-5c31fe8e-f952-4e71-b32a-ec4759a7fc07. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 27 23:07:43 compute-0 nova_compute[185650]: 2026-01-27 23:07:43.764 185654 DEBUG oslo_concurrency.lockutils [req-c2e5e707-8247-49c2-a4d9-292aca0ceb3f req-48af84f4-7912-4982-ba48-d156a53800d4 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquiring lock "refresh_cache-9033d5a6-ab60-43e3-bbcb-3a8b83161c58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 23:07:45 compute-0 nova_compute[185650]: 2026-01-27 23:07:45.335 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:45 compute-0 nova_compute[185650]: 2026-01-27 23:07:45.638 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:47 compute-0 podman[250075]: 2026-01-27 23:07:47.407990731 +0000 UTC m=+0.108878237 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 27 23:07:47 compute-0 nova_compute[185650]: 2026-01-27 23:07:47.752 185654 DEBUG nova.network.neutron [None req-d961fe3b-8891-44b9-b149-ad73d5181319 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Updating instance_info_cache with network_info: [{"id": "5c31fe8e-f952-4e71-b32a-ec4759a7fc07", "address": "fa:16:3e:81:28:a4", "network": {"id": "b56ee5fa-e690-4d9b-a6e1-7815589f421e", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-161936656-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74f54dfa359341ba8894a95865378d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c31fe8e-f9", "ovs_interfaceid": "5c31fe8e-f952-4e71-b32a-ec4759a7fc07", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 23:07:47 compute-0 nova_compute[185650]: 2026-01-27 23:07:47.777 185654 DEBUG oslo_concurrency.lockutils [None req-d961fe3b-8891-44b9-b149-ad73d5181319 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] Releasing lock "refresh_cache-9033d5a6-ab60-43e3-bbcb-3a8b83161c58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 23:07:47 compute-0 nova_compute[185650]: 2026-01-27 23:07:47.778 185654 DEBUG nova.compute.manager [None req-d961fe3b-8891-44b9-b149-ad73d5181319 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Jan 27 23:07:47 compute-0 nova_compute[185650]: 2026-01-27 23:07:47.778 185654 DEBUG nova.compute.manager [None req-d961fe3b-8891-44b9-b149-ad73d5181319 39e9f4625e8b494b9682d5622bf1b206 74f54dfa359341ba8894a95865378d18 - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] network_info to inject: |[{"id": "5c31fe8e-f952-4e71-b32a-ec4759a7fc07", "address": "fa:16:3e:81:28:a4", "network": {"id": "b56ee5fa-e690-4d9b-a6e1-7815589f421e", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-161936656-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74f54dfa359341ba8894a95865378d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c31fe8e-f9", "ovs_interfaceid": "5c31fe8e-f952-4e71-b32a-ec4759a7fc07", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Jan 27 23:07:47 compute-0 nova_compute[185650]: 2026-01-27 23:07:47.781 185654 DEBUG oslo_concurrency.lockutils [req-c2e5e707-8247-49c2-a4d9-292aca0ceb3f req-48af84f4-7912-4982-ba48-d156a53800d4 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Acquired lock "refresh_cache-9033d5a6-ab60-43e3-bbcb-3a8b83161c58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 23:07:47 compute-0 nova_compute[185650]: 2026-01-27 23:07:47.781 185654 DEBUG nova.network.neutron [req-c2e5e707-8247-49c2-a4d9-292aca0ceb3f req-48af84f4-7912-4982-ba48-d156a53800d4 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Refreshing network info cache for port 5c31fe8e-f952-4e71-b32a-ec4759a7fc07 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 27 23:07:49 compute-0 nova_compute[185650]: 2026-01-27 23:07:49.932 185654 DEBUG nova.network.neutron [req-c2e5e707-8247-49c2-a4d9-292aca0ceb3f req-48af84f4-7912-4982-ba48-d156a53800d4 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Updated VIF entry in instance network info cache for port 5c31fe8e-f952-4e71-b32a-ec4759a7fc07. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 27 23:07:49 compute-0 nova_compute[185650]: 2026-01-27 23:07:49.933 185654 DEBUG nova.network.neutron [req-c2e5e707-8247-49c2-a4d9-292aca0ceb3f req-48af84f4-7912-4982-ba48-d156a53800d4 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] [instance: 9033d5a6-ab60-43e3-bbcb-3a8b83161c58] Updating instance_info_cache with network_info: [{"id": "5c31fe8e-f952-4e71-b32a-ec4759a7fc07", "address": "fa:16:3e:81:28:a4", "network": {"id": "b56ee5fa-e690-4d9b-a6e1-7815589f421e", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-161936656-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74f54dfa359341ba8894a95865378d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c31fe8e-f9", "ovs_interfaceid": "5c31fe8e-f952-4e71-b32a-ec4759a7fc07", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 23:07:49 compute-0 nova_compute[185650]: 2026-01-27 23:07:49.953 185654 DEBUG oslo_concurrency.lockutils [req-c2e5e707-8247-49c2-a4d9-292aca0ceb3f req-48af84f4-7912-4982-ba48-d156a53800d4 b6e1d89fbb5f4313b8ea259bf5720310 9e79929cb4ce4fe29ff65f34c447ae0e - - default default] Releasing lock "refresh_cache-9033d5a6-ab60-43e3-bbcb-3a8b83161c58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 23:07:50 compute-0 nova_compute[185650]: 2026-01-27 23:07:50.338 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:50 compute-0 podman[250097]: 2026-01-27 23:07:50.438600513 +0000 UTC m=+0.120335322 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=openstack_network_exporter, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 27 23:07:50 compute-0 nova_compute[185650]: 2026-01-27 23:07:50.640 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:55 compute-0 nova_compute[185650]: 2026-01-27 23:07:55.345 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:55 compute-0 nova_compute[185650]: 2026-01-27 23:07:55.643 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:07:59 compute-0 podman[250117]: 2026-01-27 23:07:59.404279611 +0000 UTC m=+0.082476285 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 27 23:07:59 compute-0 podman[250118]: 2026-01-27 23:07:59.435286359 +0000 UTC m=+0.096014086 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260126, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 27 23:07:59 compute-0 podman[201529]: time="2026-01-27T23:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 23:07:59 compute-0 podman[201529]: @ - - [27/Jan/2026:23:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30974 "" "Go-http-client/1.1"
Jan 27 23:07:59 compute-0 podman[201529]: @ - - [27/Jan/2026:23:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5310 "" "Go-http-client/1.1"
Jan 27 23:08:00 compute-0 nova_compute[185650]: 2026-01-27 23:08:00.354 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:08:00 compute-0 nova_compute[185650]: 2026-01-27 23:08:00.647 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:08:01 compute-0 openstack_network_exporter[204648]: ERROR   23:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 23:08:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:08:01 compute-0 openstack_network_exporter[204648]: ERROR   23:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 23:08:01 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:08:03 compute-0 nova_compute[185650]: 2026-01-27 23:08:03.993 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:08:04 compute-0 nova_compute[185650]: 2026-01-27 23:08:04.033 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:08:04 compute-0 nova_compute[185650]: 2026-01-27 23:08:04.034 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:08:04 compute-0 nova_compute[185650]: 2026-01-27 23:08:04.035 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:08:04 compute-0 nova_compute[185650]: 2026-01-27 23:08:04.035 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 27 23:08:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:08:04.165 107302 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:08:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:08:04.166 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:08:04 compute-0 ovn_metadata_agent[107297]: 2026-01-27 23:08:04.167 107302 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:08:04 compute-0 nova_compute[185650]: 2026-01-27 23:08:04.171 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66eb7f87-9511-4da7-8733-ef0673cfab67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:08:04 compute-0 nova_compute[185650]: 2026-01-27 23:08:04.249 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66eb7f87-9511-4da7-8733-ef0673cfab67/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:08:04 compute-0 nova_compute[185650]: 2026-01-27 23:08:04.251 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66eb7f87-9511-4da7-8733-ef0673cfab67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:08:04 compute-0 nova_compute[185650]: 2026-01-27 23:08:04.349 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66eb7f87-9511-4da7-8733-ef0673cfab67/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:08:04 compute-0 nova_compute[185650]: 2026-01-27 23:08:04.358 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:08:04 compute-0 podman[250156]: 2026-01-27 23:08:04.371019222 +0000 UTC m=+0.073801533 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 27 23:08:04 compute-0 nova_compute[185650]: 2026-01-27 23:08:04.439 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:08:04 compute-0 nova_compute[185650]: 2026-01-27 23:08:04.441 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:08:04 compute-0 nova_compute[185650]: 2026-01-27 23:08:04.524 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6e4e7f3d-60d3-49cf-b7be-e93194c45a44/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:08:04 compute-0 nova_compute[185650]: 2026-01-27 23:08:04.533 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:08:04 compute-0 nova_compute[185650]: 2026-01-27 23:08:04.607 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:08:04 compute-0 nova_compute[185650]: 2026-01-27 23:08:04.608 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 27 23:08:04 compute-0 nova_compute[185650]: 2026-01-27 23:08:04.677 185654 DEBUG oslo_concurrency.processutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9033d5a6-ab60-43e3-bbcb-3a8b83161c58/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 27 23:08:05 compute-0 nova_compute[185650]: 2026-01-27 23:08:05.095 185654 WARNING nova.virt.libvirt.driver [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 27 23:08:05 compute-0 nova_compute[185650]: 2026-01-27 23:08:05.096 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4850MB free_disk=72.28801727294922GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 27 23:08:05 compute-0 nova_compute[185650]: 2026-01-27 23:08:05.097 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 27 23:08:05 compute-0 nova_compute[185650]: 2026-01-27 23:08:05.097 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 27 23:08:05 compute-0 nova_compute[185650]: 2026-01-27 23:08:05.358 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:08:05 compute-0 nova_compute[185650]: 2026-01-27 23:08:05.413 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 66eb7f87-9511-4da7-8733-ef0673cfab67 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 23:08:05 compute-0 nova_compute[185650]: 2026-01-27 23:08:05.413 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 9033d5a6-ab60-43e3-bbcb-3a8b83161c58 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 23:08:05 compute-0 nova_compute[185650]: 2026-01-27 23:08:05.414 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance 6e4e7f3d-60d3-49cf-b7be-e93194c45a44 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 23:08:05 compute-0 nova_compute[185650]: 2026-01-27 23:08:05.414 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Instance dd37badf-e0f2-4ba3-b12e-f4238236f28d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 27 23:08:05 compute-0 nova_compute[185650]: 2026-01-27 23:08:05.414 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 27 23:08:05 compute-0 nova_compute[185650]: 2026-01-27 23:08:05.415 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 27 23:08:05 compute-0 nova_compute[185650]: 2026-01-27 23:08:05.547 185654 DEBUG nova.compute.provider_tree [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed in ProviderTree for provider: 200c8b8b-d176-4e2d-a773-1ed54a9635a3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 27 23:08:05 compute-0 nova_compute[185650]: 2026-01-27 23:08:05.564 185654 DEBUG nova.scheduler.client.report [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Inventory has not changed for provider 200c8b8b-d176-4e2d-a773-1ed54a9635a3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 27 23:08:05 compute-0 nova_compute[185650]: 2026-01-27 23:08:05.585 185654 DEBUG nova.compute.resource_tracker [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 27 23:08:05 compute-0 nova_compute[185650]: 2026-01-27 23:08:05.585 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.488s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 27 23:08:05 compute-0 nova_compute[185650]: 2026-01-27 23:08:05.650 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:08:09 compute-0 podman[250193]: 2026-01-27 23:08:09.393853073 +0000 UTC m=+0.089593214 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., config_id=kepler, name=ubi9, version=9.4, release=1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.expose-services=)
Jan 27 23:08:09 compute-0 podman[250195]: 2026-01-27 23:08:09.416504078 +0000 UTC m=+0.104994486 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 27 23:08:09 compute-0 podman[250194]: 2026-01-27 23:08:09.459525317 +0000 UTC m=+0.147282145 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 27 23:08:10 compute-0 nova_compute[185650]: 2026-01-27 23:08:10.364 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:08:10 compute-0 nova_compute[185650]: 2026-01-27 23:08:10.585 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:08:10 compute-0 nova_compute[185650]: 2026-01-27 23:08:10.586 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 27 23:08:10 compute-0 nova_compute[185650]: 2026-01-27 23:08:10.586 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 27 23:08:10 compute-0 nova_compute[185650]: 2026-01-27 23:08:10.607 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: dd37badf-e0f2-4ba3-b12e-f4238236f28d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 27 23:08:10 compute-0 nova_compute[185650]: 2026-01-27 23:08:10.653 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:08:11 compute-0 nova_compute[185650]: 2026-01-27 23:08:11.762 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquiring lock "refresh_cache-66eb7f87-9511-4da7-8733-ef0673cfab67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 27 23:08:11 compute-0 nova_compute[185650]: 2026-01-27 23:08:11.762 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Acquired lock "refresh_cache-66eb7f87-9511-4da7-8733-ef0673cfab67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 27 23:08:11 compute-0 nova_compute[185650]: 2026-01-27 23:08:11.763 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 27 23:08:11 compute-0 nova_compute[185650]: 2026-01-27 23:08:11.764 185654 DEBUG nova.objects.instance [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 66eb7f87-9511-4da7-8733-ef0673cfab67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 27 23:08:13 compute-0 ovn_controller[98048]: 2026-01-27T23:08:13Z|00115|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Jan 27 23:08:13 compute-0 nova_compute[185650]: 2026-01-27 23:08:13.981 185654 DEBUG nova.network.neutron [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Updating instance_info_cache with network_info: [{"id": "64b86a6b-6de4-4fee-917e-229794042e8e", "address": "fa:16:3e:23:60:c6", "network": {"id": "6d0f9d9e-8cd6-4a68-8926-de88e69f60d4", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1504245290-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "270690dca2514a49843b866111c87d39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64b86a6b-6d", "ovs_interfaceid": "64b86a6b-6de4-4fee-917e-229794042e8e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 27 23:08:14 compute-0 nova_compute[185650]: 2026-01-27 23:08:14.008 185654 DEBUG oslo_concurrency.lockutils [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Releasing lock "refresh_cache-66eb7f87-9511-4da7-8733-ef0673cfab67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 27 23:08:14 compute-0 nova_compute[185650]: 2026-01-27 23:08:14.009 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] [instance: 66eb7f87-9511-4da7-8733-ef0673cfab67] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 27 23:08:14 compute-0 nova_compute[185650]: 2026-01-27 23:08:14.011 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:08:14 compute-0 nova_compute[185650]: 2026-01-27 23:08:14.012 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:08:14 compute-0 nova_compute[185650]: 2026-01-27 23:08:14.012 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:08:14 compute-0 nova_compute[185650]: 2026-01-27 23:08:14.012 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:08:14 compute-0 nova_compute[185650]: 2026-01-27 23:08:14.014 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:08:14 compute-0 nova_compute[185650]: 2026-01-27 23:08:14.014 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:08:14 compute-0 nova_compute[185650]: 2026-01-27 23:08:14.015 185654 DEBUG nova.compute.manager [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 27 23:08:15 compute-0 nova_compute[185650]: 2026-01-27 23:08:15.370 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:08:15 compute-0 nova_compute[185650]: 2026-01-27 23:08:15.656 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:08:18 compute-0 podman[250257]: 2026-01-27 23:08:18.393760425 +0000 UTC m=+0.085436734 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 27 23:08:20 compute-0 nova_compute[185650]: 2026-01-27 23:08:20.376 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:08:20 compute-0 nova_compute[185650]: 2026-01-27 23:08:20.659 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:08:21 compute-0 podman[250282]: 2026-01-27 23:08:21.400833285 +0000 UTC m=+0.088965567 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-type=git, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, config_id=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible)
Jan 27 23:08:23 compute-0 nova_compute[185650]: 2026-01-27 23:08:23.417 185654 DEBUG oslo_service.periodic_task [None req-c702579e-f7d6-42e7-b3c7-85fc7207400b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 27 23:08:25 compute-0 nova_compute[185650]: 2026-01-27 23:08:25.382 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:08:25 compute-0 nova_compute[185650]: 2026-01-27 23:08:25.662 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:08:29 compute-0 podman[201529]: time="2026-01-27T23:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 23:08:29 compute-0 podman[201529]: @ - - [27/Jan/2026:23:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30974 "" "Go-http-client/1.1"
Jan 27 23:08:29 compute-0 podman[201529]: @ - - [27/Jan/2026:23:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5318 "" "Go-http-client/1.1"
Jan 27 23:08:30 compute-0 nova_compute[185650]: 2026-01-27 23:08:30.386 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:08:30 compute-0 podman[250306]: 2026-01-27 23:08:30.400075992 +0000 UTC m=+0.087825276 container health_status 7c807bf92e5e62221cb7f82bb0092b6eb64dbc0f8942efae4eb3cf52d8ef0617 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=d987a3aaaf2d3bd4a150aba0b9a20c40, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 27 23:08:30 compute-0 podman[250305]: 2026-01-27 23:08:30.406827123 +0000 UTC m=+0.089910312 container health_status 70801378de94128726be2a6e292cf0217436f7c8e3448ce02566813e0a2178cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 27 23:08:30 compute-0 nova_compute[185650]: 2026-01-27 23:08:30.665 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:08:31 compute-0 openstack_network_exporter[204648]: ERROR   23:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 27 23:08:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:08:31 compute-0 openstack_network_exporter[204648]: ERROR   23:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 27 23:08:31 compute-0 openstack_network_exporter[204648]: 
Jan 27 23:08:35 compute-0 nova_compute[185650]: 2026-01-27 23:08:35.392 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:08:35 compute-0 podman[250340]: 2026-01-27 23:08:35.411505838 +0000 UTC m=+0.109610309 container health_status 245b08a2c0cf3af08cf89466a1d24173e0e0a593018d37442c9c37d99bc3907b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 27 23:08:35 compute-0 nova_compute[185650]: 2026-01-27 23:08:35.669 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:08:39 compute-0 sshd-session[250364]: Accepted publickey for zuul from 192.168.122.10 port 48174 ssh2: ECDSA SHA256:f2siSFgqhRl+V43NMPJ82N3mZUylXFtu0KAbYfQTK7A
Jan 27 23:08:39 compute-0 systemd-logind[789]: New session 31 of user zuul.
Jan 27 23:08:39 compute-0 systemd[1]: Started Session 31 of User zuul.
Jan 27 23:08:39 compute-0 sshd-session[250364]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 27 23:08:39 compute-0 sudo[250368]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 27 23:08:39 compute-0 sudo[250368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 27 23:08:39 compute-0 podman[250394]: 2026-01-27 23:08:39.717595681 +0000 UTC m=+0.113548694 container health_status d32b98ebd25ce05e625d8d64e6eca39764ddc176b5c5f1d01ed693b2328c2236 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi)
Jan 27 23:08:39 compute-0 podman[250392]: 2026-01-27 23:08:39.723247572 +0000 UTC m=+0.121809235 container health_status 0025921e7f27ce56ed1a5f82f52d4b8bc26d0d679dcf308f3fed630272c7d650 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, release=1214.1726694543, com.redhat.component=ubi9-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, name=ubi9, config_id=kepler, container_name=kepler, managed_by=edpm_ansible, distribution-scope=public, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=)
Jan 27 23:08:39 compute-0 podman[250393]: 2026-01-27 23:08:39.755995807 +0000 UTC m=+0.154314513 container health_status 5c18c36ffb633d117e19903069e2a8f5915c81a4312fb2a5426aa3fb5e2b5f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f2995fc5397da7062fb8005b462f38a22f4de9a4a9eafd0dde78d21dd14d1583-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 27 23:08:40 compute-0 nova_compute[185650]: 2026-01-27 23:08:40.396 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:08:40 compute-0 nova_compute[185650]: 2026-01-27 23:08:40.671 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:08:45 compute-0 nova_compute[185650]: 2026-01-27 23:08:45.401 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:08:45 compute-0 ovs-vsctl[250610]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 27 23:08:45 compute-0 nova_compute[185650]: 2026-01-27 23:08:45.673 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:08:46 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 250395 (sos)
Jan 27 23:08:46 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 27 23:08:46 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 27 23:08:47 compute-0 virtqemud[185375]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 27 23:08:47 compute-0 virtqemud[185375]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 27 23:08:47 compute-0 virtqemud[185375]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 27 23:08:48 compute-0 crontab[251037]: (root) LIST (root)
Jan 27 23:08:49 compute-0 podman[251085]: 2026-01-27 23:08:49.417134436 +0000 UTC m=+0.106235599 container health_status f1a3592dd8977f41c360d3ff3d816e94fbacf395c3131f4241dbbc9e8f1745de (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 27 23:08:50 compute-0 nova_compute[185650]: 2026-01-27 23:08:50.405 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:08:50 compute-0 nova_compute[185650]: 2026-01-27 23:08:50.676 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:08:51 compute-0 systemd[1]: Starting Hostname Service...
Jan 27 23:08:51 compute-0 podman[251164]: 2026-01-27 23:08:51.737050131 +0000 UTC m=+0.115793714 container health_status b1571fccf142aed38618277362e0b9e69fb588c44f3370bdbb3a19fdd54e4372 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '7e81283119fcbb6bdd3f35336317166b6162a9731e45e4be8d149d2efe82a530-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, version=9.6)
Jan 27 23:08:51 compute-0 systemd[1]: Started Hostname Service.
Jan 27 23:08:55 compute-0 nova_compute[185650]: 2026-01-27 23:08:55.410 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:08:55 compute-0 nova_compute[185650]: 2026-01-27 23:08:55.681 185654 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 27 23:08:59 compute-0 podman[201529]: time="2026-01-27T23:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 27 23:08:59 compute-0 podman[201529]: @ - - [27/Jan/2026:23:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30974 "" "Go-http-client/1.1"
Jan 27 23:08:59 compute-0 podman[201529]: @ - - [27/Jan/2026:23:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5312 "" "Go-http-client/1.1"
